perm filename CSLI.86[BB,DOC] blob
sn#831626 filedate 1987-01-06 generic text, type C, neo UTF8
COMMENT ⊗ VALID 00174 PAGES
C REC PAGE DESCRIPTION
C00001 00001
C00026 00002 ∂25-Sep-85 1412 @SU-CSLI.ARPA:chertok%ucbcogsci@Berkeley.EDU UCB Cognitive Science Seminar--Oct. 1
C00031 00003 ∂25-Sep-85 1738 EMMA@SU-CSLI.ARPA Newsletter September 26, No. 47
C00053 00004 ∂02-Oct-85 1248 @SU-CSLI.ARPA:admin@ucbcogsci.Berkeley.EDU UCB Cognitive Science Seminar--Oct. 8 (Terry Winograd, Stanford)
C00059 00005 ∂02-Oct-85 1726 EMMA@SU-CSLI.ARPA addendum to the newsletter
C00060 00006 ∂02-Oct-85 1744 EMMA@SU-CSLI.ARPA Newsletter October 3, No. 48
C00069 00007 ∂03-Oct-85 1236 WINOGRAD@SU-CSLI.ARPA Environments group - Monday 12:00pm
C00074 00008 ∂09-Oct-85 1656 @SU-CSLI.ARPA:admin@ucbcogsci.Berkeley.EDU Cognitive Science Seminar--Oct. 15 (Ron Kaplan, Xerox PARC & Stanford)
C00081 00009 ∂09-Oct-85 1703 EMMA@SU-CSLI.ARPA Newsletter October 10, No. 49
C00109 00010 ∂16-Oct-85 1425 @SU-CSLI.ARPA:admin@ucbcogsci.Berkeley.EDU UCB Cognitive Science Seminar--Oct. 22, 1985
C00116 00011 ∂23-Oct-85 0749 @SU-CSLI.ARPA:RPERRAULT@SRI-AI.ARPA Talk by Bill Rounds
C00119 00012 ∂23-Oct-85 1633 @SU-CSLI.ARPA:admin@cogsci.Berkeley.EDU UCB Cognitive Science Seminar--Oct. 29, 1985
C00124 00013 ∂23-Oct-85 1733 EMMA@SU-CSLI.ARPA Newsletter October 24, No. 51
C00148 00014 ∂24-Oct-85 0819 EMMA@SU-CSLI.ARPA Today's CSLI seminar
C00149 00015 ∂30-Oct-85 0931 PHILOSOPHY@SU-CSLI.ARPA Josh Cohen
C00150 00016 ∂30-Oct-85 1732 EMMA@SU-CSLI.ARPA Newsletter October 31, No. 52
C00172 00017 ∂01-Nov-85 0631 @SU-CSLI.ARPA:admin%cogsci@BERKELEY.EDU UCB Cognitive Science Seminar--Nov. 5
C00178 00018 ∂04-Nov-85 2048 @SU-CSLI.ARPA:dirk@SU-PSYCH Psychology Department Friday Seminar.
C00183 00019 ∂07-Nov-85 0646 @SU-CSLI.ARPA:emma@csli-whitehead CSLI Newsletter
C00185 00020 ∂07-Nov-85 0946 EMMA@SU-CSLI.ARPA re: newsletter
C00187 00021 ∂07-Nov-85 1726 EMMA@SU-CSLI.ARPA Newsletter November 7, No. 1
C00208 00022 ∂08-Nov-85 1647 @SU-CSLI.ARPA:admin%cogsci@BERKELEY.EDU UCB Cognitive Science Seminar--Nov. 12 (R. Wilensky, UCB)
C00214 00023 ∂12-Nov-85 0835 EMMA@SU-CSLI.ARPA TINLunch
C00216 00024 ∂13-Nov-85 1758 EMMA@SU-CSLI.ARPA Newsletter November 14, No. 2
C00235 00025 ∂14-Nov-85 0830 EMMA@SU-CSLI.ARPA Newsletter addition
C00237 00026 ∂14-Nov-85 2023 @SU-CSLI.ARPA:admin%cogsci@BERKELEY.EDU UCB Cognitive Science Seminar--Nov. 19 (R. Alterman, UCB)
C00243 00027 ∂20-Nov-85 1814 EMMA@SU-CSLI.ARPA Newsletter November 21, No. 3
C00266 00028 ∂21-Nov-85 0933 EMMA@SU-CSLI.ARPA Newsletter addition
C00267 00029 ∂21-Nov-85 1754 WINOGRAD@SU-CSLI.ARPA No ENVIRONMENTS meeting until Dec 9
C00269 00030 ∂23-Nov-85 0352 @SU-CSLI.ARPA:admin%cogsci@BERKELEY.EDU UCB Cognitive Science Seminar--Nov. 26 (E. Clark, Stanford)
C00274 00031 ∂25-Nov-85 1600 EMMA@SU-CSLI.ARPA The Next TINLunch
C00277 00032 ∂02-Dec-85 0947 @SU-CSLI.ARPA:admin%cogsci@BERKELEY.EDU UCB Cognitive Science Seminar--Dec. 3, 1985
C00284 00033 ∂04-Dec-85 1702 EMMA@SU-CSLI.ARPA Newsletter December 5, No. 4
C00300 00034 ∂05-Dec-85 1223 EMMA@SU-CSLI.ARPA Newsletter correction
C00307 00035 ∂09-Dec-85 1720 @SU-CSLI.ARPA:WALDINGER@SRI-AI.ARPA seminar on program transformation wednes, 3:45
C00309 00036 ∂11-Dec-85 1752 EMMA@SU-CSLI.ARPA Newsletter December 12, No. 5
C00326 00037 ∂19-Dec-85 1533 EMMA@SU-CSLI.ARPA Newsletter December 19, No. 6
C00338 00038 ∂06-Jan-86 1445 JAMIE@SU-CSLI.ARPA Thursday Events
C00339 00039 ∂08-Jan-86 1313 EMMA@SU-CSLI.ARPA This week's TINLunch
C00343 00040 ∂08-Jan-86 1758 EMMA@SU-CSLI.ARPA Newsletter January 9, No. 7
C00374 00041 ∂15-Jan-86 1739 EMMA@SU-CSLI.ARPA Newsletter January 16, No. 8
C00386 00042 ∂22-Jan-86 1811 @SU-CSLI.ARPA:admin%cogsci@BERKELEY.EDU UCB Cognitive Science Seminar--Jan. 28, (Andrea diSessa,UCB)
C00392 00043 ∂22-Jan-86 1823 EMMA@SU-CSLI.ARPA Newsletter January 23, No. 9
C00405 00044 ∂29-Jan-86 1647 @SU-CSLI.ARPA:admin%cogsci@BERKELEY.EDU UCB Cognitive Science Seminar--Feb. 4, 1986
C00411 00045 ∂29-Jan-86 1803 EMMA@SU-CSLI.ARPA Calendar Vol. 1, No. 1
C00418 00046 ∂30-Jan-86 0924 EMMA@SU-CSLI.ARPA CSLI mailing lists
C00420 00047 ∂05-Feb-86 1605 @SU-CSLI.ARPA:admin%cogsci@BERKELEY.EDU UCB Cognitive Science Seminar--Feb. 11 (Jonas Langer)
C00427 00048 ∂05-Feb-86 2018 @SU-CSLI.ARPA:admin%cogsci@BERKELEY.EDU UCB Cognitive Science Seminar--Feb. 11 (Jonas Langer)
C00434 00049 ∂06-Feb-86 0829 EMMA@SU-CSLI.ARPA Calendar February 6, No. 2
C00442 00050 ∂06-Feb-86 0842 EMMA@SU-CSLI.ARPA Correction Calendar
C00443 00051 ∂12-Feb-86 1045 @SU-CSLI.ARPA:BrianSmith.pa@Xerox.COM CPSR Annual Meeting: March 1
C00449 00052 ∂12-Feb-86 1408 @SU-CSLI.ARPA:admin%cogsci@BERKELEY.EDU Berkeley Linguistics Society's 12th Annual Meeting
C00459 00053 ∂12-Feb-86 1748 EMMA@SU-CSLI.ARPA Calendar February 13, No. 3
C00476 00054 ∂12-Feb-86 1758 @SU-CSLI.ARPA:admin%cogsci@BERKELEY.EDU UCB Cognitive Science Seminar--Feb. 18 (Michael Silverstein)
C00482 00055 ∂13-Feb-86 1956 @SU-CSLI.ARPA:Zaenen.pa@Xerox.COM David Dowty's talk
C00486 00056 ∂19-Feb-86 1725 EMMA@SU-CSLI.ARPA Calendar February 20, No. 4
C00504 00057 ∂20-Feb-86 1527 @SU-CSLI.ARPA:admin%cogsci@BERKELEY.EDU UCB Cognitive Science Seminar--Feb. 25 (F. Reif)
C00510 00058 ∂24-Feb-86 0910 EMMA@SU-CSLI.ARPA Calendar update
C00516 00059 ∂24-Feb-86 1439 EMMA@SU-CSLI.ARPA re: Calendar update
C00517 00060 ∂26-Feb-86 1853 EMMA@SU-CSLI.ARPA Calendar February 27, No. 5
C00534 00061 ∂27-Feb-86 1507 EMMA@SU-CSLI.ARPA Calendar Addition
C00536 00062 ∂27-Feb-86 1529 @SU-CSLI.ARPA:admin%cogsci@BERKELEY.EDU UCB Cognitive Science Seminar--March 4 (Curtis Hardyck)
C00542 00063 ∂27-Feb-86 1548 @SU-CSLI.ARPA:GAIFMAN@SRI-AI.ARPA Gaifman's talk today
C00544 00064 ∂03-Mar-86 1245 @SU-CSLI.ARPA:Bush@SRI-KL.ARPA housing
C00546 00065 ∂04-Mar-86 0918 CHRIS@SU-CSLI.ARPA Honda civic with lights on.
C00547 00066 ∂04-Mar-86 1531 @SU-CSLI.ARPA:GAIFMAN@SRI-AI.ARPA "I'm talking nonsense" -supervaluations
C00550 00067 ∂05-Mar-86 1709 EMMA@SU-CSLI.ARPA Calendar, March 6, No. 6
C00561 00068 ∂06-Mar-86 0943 EMMA@SU-CSLI.ARPA Calendar update
C00563 00069 ∂06-Mar-86 1011 @SU-CSLI.ARPA:admin%cogsci@BERKELEY.EDU UCB Cognitive Science Seminar--March 11 (Carlota Smith)
C00570 00070 ∂12-Mar-86 1015 EMMA@SU-CSLI.ARPA Tomorrow's CSLI colloquium
C00571 00071 ∂12-Mar-86 1641 EMMA@SU-CSLI.ARPA Calendar, March 13, No. 7
C00585 00072 ∂12-Mar-86 1652 @SU-CSLI.ARPA:admin%cogsci@BERKELEY.EDU UCB Cognitive Science Seminar--March 18 (John Haviland)
C00593 00073 ∂13-Mar-86 0920 INGRID@SU-CSLI.ARPA Garage Sale
C00595 00074 ∂13-Mar-86 1027 @SU-CSLI.ARPA:JROBINSON@SRI-WARBUCKS.ARPA Re: Garage Sale
C00597 00075 ∂13-Mar-86 1046 POSER@SU-CSLI.ARPA Re: Garage Sale
C00598 00076 ∂13-Mar-86 1059 INGRID@SU-CSLI.ARPA Garage Sale
C00599 00077 ∂17-Mar-86 1706 EMMA@SU-CSLI.ARPA Friends Mailing List
C00601 00078 ∂17-Mar-86 1750 EMMA@SU-CSLI.ARPA CSLI Monthly
C00603 00079 ∂17-Mar-86 1823 EMMA@SU-CSLI.ARPA re: CSLI Monthly
C00604 00080 ∂18-Mar-86 1711 EMMA@SU-CSLI.ARPA CSLI Monthly, part I
C00622 00081 ∂18-Mar-86 1734 EMMA@SU-CSLI.ARPA Old Stanford phone numbers
C00624 00082 ∂18-Mar-86 1821 EMMA@SU-CSLI.ARPA CSLI Monthly, part II
C00647 00083 ∂18-Mar-86 1913 EMMA@SU-CSLI.ARPA CSLI Monthly, part III
C00664 00084 ∂18-Mar-86 2000 EMMA@SU-CSLI.ARPA CSLI Monthly, part IV
C00678 00085 ∂19-Mar-86 1705 EMMA@SU-CSLI.ARPA Calendar, March 20, No. 8
C00685 00086 ∂26-Mar-86 1746 EMMA@SU-CSLI.ARPA Calendar, March 27, No. 9
C00695 00087 ∂02-Apr-86 1752 EMMA@SU-CSLI.ARPA Calendar, April 3, No. 10
C00713 00088 ∂04-Apr-86 0911 EMMA@SU-CSLI.ARPA CSLI: Late Announcement
C00715 00089 ∂09-Apr-86 1722 EMMA@SU-CSLI.ARPA Calendar, April 10, No. 11
C00724 00090 ∂14-Apr-86 1817 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 2
C00725 00091 ∂16-Apr-86 1813 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 2, part 1
C00746 00092 ∂16-Apr-86 1911 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 2, part 2
C00763 00093 ∂16-Apr-86 2047 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 2, part 3
C00780 00094 ∂16-Apr-86 2142 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 2, part 4
C00806 00095 ∂16-Apr-86 2251 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 2, part 5
C00831 00096 ∂16-Apr-86 2354 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 2, part 6
C00858 00097 ∂17-Apr-86 0038 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 2, part 7 (and last)
C00876 00098 ∂17-Apr-86 0118 EMMA@SU-CSLI.ARPA Calendar, April 17, No. 12
C00884 00099 ∂23-Apr-86 1813 EMMA@SU-CSLI.ARPA Calendar, April 24, No. 13
C00894 00100 ∂25-Apr-86 0947 EMMA@SU-CSLI.ARPA Logic seminar
C00896 00101 ∂28-Apr-86 1000 EMMA@SU-CSLI.ARPA CSLI Calendar update
C00900 00102 ∂30-Apr-86 1803 EMMA@SU-CSLI.ARPA Calendar, May 1, No. 14
C00918 00103 ∂01-May-86 1419 EMMA@SU-CSLI.ARPA Calendar updates
C00920 00104 ∂07-May-86 1715 EMMA@SU-CSLI.ARPA Calendar, May 8, No. 15
C00928 00105 ∂08-May-86 1413 EMMA@SU-CSLI.ARPA Late Announcement
C00929 00106 ∂09-May-86 0907 EMMA@SU-CSLI.ARPA Psychology Seminar
C00932 00107 ∂13-May-86 0937 EMMA@SU-CSLI.ARPA Van Nguyen talk
C00936 00108 ∂14-May-86 1710 EMMA@SU-CSLI.ARPA Calendar, May 15, No. 16
C00943 00109 ∂15-May-86 1704 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 3, part 1
C00959 00110 ∂15-May-86 1751 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 3, part 2
C00983 00111 ∂15-May-86 1900 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 3, part 3
C01005 00112 ∂15-May-86 2019 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 3, part 4
C01030 00113 ∂15-May-86 2024 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 3, part 4
C01055 00114 ∂15-May-86 2029 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 3, part 4
C01080 00115 ∂15-May-86 2034 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 3, part 4
C01105 00116 ∂15-May-86 2042 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 3, part 4
C01130 00117 ∂15-May-86 2052 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 3, part 5
C01151 00118 ∂15-May-86 2057 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 3, part 5
C01172 00119 ∂15-May-86 2103 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 3, part 5
C01193 00120 ∂15-May-86 2113 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 3, part 5
C01214 00121 ∂15-May-86 2136 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 3, part 6
C01240 00122 ∂15-May-86 2141 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 3, part 6
C01266 00123 ∂15-May-86 2147 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 3, part 6
C01292 00124 ∂15-May-86 2152 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 3, part 6
C01318 00125 ∂15-May-86 2210 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 3, part 7
C01330 00126 ∂16-May-86 0922 EMMA@SU-CSLI.ARPA CSLI Calendar update
C01332 00127 ∂16-May-86 1020 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 3, part 5
C01353 00128 ∂16-May-86 1026 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 3, part 6
C01379 00129 ∂20-May-86 1551 EMMA@SU-CSLI.ARPA Calendar update
C01383 00130 ∂21-May-86 1800 JAMIE@SU-CSLI.ARPA Calendar, May 22, No. 17
C01400 00131 ∂28-May-86 1725 JAMIE@SU-CSLI.ARPA Calendar, May 29, No. 18
C01405 00132 ∂02-Jun-86 0846 JAMIE@SU-CSLI.ARPA [Carl Pollard <POLLARD@SU-CSLI.ARPA>: ESCOL 86]
C01408 00133 ∂04-Jun-86 1840 JAMIE@SU-CSLI.ARPA Calendar, June 5, No. 19
C01418 00134 ∂11-Jun-86 1537 EMMA@SU-CSLI.ARPA Calendar, June 12, No. 20
C01421 00135 ∂24-Jun-86 1615 JAMIE@SU-CSLI.ARPA CSLI Monthly, Vol. 1, No. 4, part 1
C01451 00136 ∂24-Jun-86 1748 JAMIE@SU-CSLI.ARPA CSLI Monthly, Vol. 1, No. 4, part 2
C01490 00137 ∂24-Jun-86 1904 JAMIE@SU-CSLI.ARPA CSLI Monthly, Vol 1., No. 4, part 3
C01520 00138 ∂24-Jun-86 2001 JAMIE@SU-CSLI.ARPA CSLI Monthly, Vol. 1, No. 4, part 4
C01538 00139 ∂24-Jun-86 2114 JAMIE@SU-CSLI.ARPA CSLI Monthly, Vol. 1, No. 4, part 5
C01572 00140 ∂24-Jun-86 2230 JAMIE@SU-CSLI.ARPA CSLI Monthly, Vol. 1, No. 4, part 6
C01601 00141 ∂24-Jun-86 2326 JAMIE@SU-CSLI.ARPA CSLI Monthly, Vol. 1, No. 4, part 7
C01618 00142 ∂14-Jul-86 0947 EMMA@CSLI.STANFORD.EDU [Richard Waldinger <WALDINGER@SRI-AI.ARPA>: talk: program transformation, tuesday]
C01624 00143 ∂18-Aug-86 1322 EMMA@CSLI.STANFORD.EDU [coraki!pratt@Sun.COM (Vaughan Pratt): Seminar: Wu Wen-tsun, "Mechanization of Geometry"]
C01628 00144 ∂01-Oct-86 1818 EMMA@CSLI.STANFORD.EDU Calendar, October 2, No. 1
C01634 00145 ∂03-Oct-86 0906 EMMA@CSLI.STANFORD.EDU Late Newsletter Entry
C01639 00146 ∂08-Oct-86 1854 EMMA@CSLI.STANFORD.EDU CSLI Calendar, October 9, No. 2
C01649 00147 ∂15-Oct-86 1753 EMMA@CSLI.STANFORD.EDU CSLI Calendar, October 16, No. 3
C01656 00148 ∂16-Oct-86 1734 EMMA@CSLI.STANFORD.EDU CSLI Monthly
C01657 00149 ∂17-Oct-86 1431 EMMA@CSLI.STANFORD.EDU CSLI Monthly, 2:1, part 3
C01673 00150 ∂17-Oct-86 1434 EMMA@CSLI.STANFORD.EDU CSLI Monthly, 1:1, part 1
C01697 00151 ∂17-Oct-86 1448 EMMA@CSLI.STANFORD.EDU CSLI Monthly, 2:1, part 4
C01720 00152 ∂17-Oct-86 1453 EMMA@CSLI.STANFORD.EDU CSLI Monthly, 2:1, part 5
C01729 00153 ∂17-Oct-86 1507 EMMA@CSLI.STANFORD.EDU CSLI Monthly, 2:1, part 6
C01749 00154 ∂17-Oct-86 1522 EMMA@CSLI.STANFORD.EDU CSLI Monthly, 2:1, part 7
C01761 00155 ∂17-Oct-86 1533 EMMA@CSLI.STANFORD.EDU CSLI Monthly, 2:1, part 8 (and last)
C01779 00156 ∂17-Oct-86 1539 EMMA@CSLI.STANFORD.EDU CSLI Monthly, 2:2, part 2
C01797 00157 ∂17-Oct-86 1931 EMMA@CSLI.STANFORD.EDU CSLI Monthly, 2:2, part 2
C01815 00158 ∂23-Oct-86 0936 EMMA@CSLI.STANFORD.EDU CSLI Calendar
C01816 00159 ∂23-Oct-86 1147 EMMA@CSLI.STANFORD.EDU CSLI Calendar, October 23, No. 4
C01828 00160 ∂28-Oct-86 1244 EMMA@CSLI.STANFORD.EDU Psychology Colloquium
C01829 00161 ∂30-Oct-86 1456 EMMA@CSLI.STANFORD.EDU CSLI Calendar, October 30, No. 5
C01835 00162 ∂05-Nov-86 1835 EMMA@CSLI.STANFORD.EDU CSLI Calendar, November 6, No. 6
C01845 00163 ∂12-Nov-86 1647 EMMA@CSLI.STANFORD.EDU CSLI Calendar, November 13, No. 7
C01853 00164 ∂19-Nov-86 1750 EMMA@CSLI.STANFORD.EDU CSLI Calendar, November 20, No. 8
C01859 00165 ∂24-Nov-86 1223 EMMA@CSLI.STANFORD.EDU CSLI Monthly
C01861 00166 ∂25-Nov-86 1732 EMMA@CSLI.STANFORD.EDU CSLI Monthly, 2:2 part 1
C01877 00167 ∂25-Nov-86 1822 EMMA@CSLI.STANFORD.EDU CSLI Monthly, 2:2 part 2
C01896 00168 ∂25-Nov-86 1931 EMMA@CSLI.STANFORD.EDU CSLI Monthly, 2:2 part 3
C01918 00169 ∂25-Nov-86 2112 EMMA@CSLI.STANFORD.EDU CSLI Monthly, 2:2 part 4
C01955 00170 ∂25-Nov-86 2157 EMMA@CSLI.STANFORD.EDU CSLI Monthly, 2:2 part 5
C01972 00171 ∂25-Nov-86 2307 EMMA@CSLI.STANFORD.EDU CSLI Monthly, 2:2 part 6
C02011 00172 ∂25-Nov-86 2346 EMMA@CSLI.STANFORD.EDU CSLI Monthly, 2:2 part 7 and last
C02022 00173 ∂03-Dec-86 1753 EMMA@CSLI.STANFORD.EDU CSLI Calendar, December 4, No. 9
C02032 00174 ∂10-Dec-86 1826 EMMA@CSLI.STANFORD.EDU CSLI Calendar, December 11, No. 10
C02047 ENDMK
C⊗;
∂25-Sep-85 1412 @SU-CSLI.ARPA:chertok%ucbcogsci@Berkeley.EDU UCB Cognitive Science Seminar--Oct. 1
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 25 Sep 85 14:10:57 PDT
Received: from UCB-VAX.ARPA by SU-CSLI.ARPA with TCP; Wed 25 Sep 85 14:07:32-PDT
Received: by UCB-VAX.ARPA (5.26/5.9)
id AA04219; Wed, 25 Sep 85 14:08:06 PDT
Received: by ucbcogsci.ARPA (5.5/5.7)
id AA28422; Wed, 25 Sep 85 14:09:11 PDT
Date: Wed, 25 Sep 85 14:09:11 PDT
From: chertok%ucbcogsci@Berkeley.EDU (Paula Chertok)
Message-Id: <8509252109.AA28422@ucbcogsci.ARPA>
To: cogsci-friends%ucbcogsci@Berkeley.EDU
Subject: UCB Cognitive Science Seminar--Oct. 1
BERKELEY COGNITIVE SCIENCE PROGRAM
Fall 1985
Cognitive Science Seminar -- IDS 237A
TIME: Tuesday, October 1, 11:00 - 12:30
PLACE: 240 Bechtel Engineering Center
(followed by)
DISCUSSION: 12:30 - 1:30 in 200 Building T-4
SPEAKER: David Rumelhart, Institute for Cognitive
Science, UCSD
TITLE: ``Parallel Distributed Processing: Explora-
tions in the Microstructure of Cognition''
Parallel Distributed Processing (PDP) is the name which I and
my colleagues at San Diego have given to the class of
neurally-inspired models of cognition we have been studying.
We have applied this class of "connectionist" models to a
variety of domains including perception, memory, language
acquisition and motor control. I will briefly present a gen-
eral framework for the class of PDP models, show how these
models can be applied in the case of acquisition of verb mor-
phology, and show how such macrostructural concepts as the
schema can be seen as emerging from the microstructure of PDP
models. Implications of the PDP perspective for our under-
standing of cognitive processes will be discussed.
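As a rough illustration of the class of models: a PDP unit computes a
weighted sum of its inputs and squashes it through a smooth
nonlinearity, with the network's knowledge stored in the weights
rather than in explicit rules. The sketch below is a toy with invented
weights, not one of the San Diego models.

    import math

    def unit_activation(inputs, weights, bias):
        # A single PDP-style unit: the weighted sum of the inputs
        # (the "net input") is squashed by a logistic function
        # into an activation strictly between 0 and 1.
        net = sum(i * w for i, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-net))

    # Two units processing the same input vector in parallel; what
    # each "knows" is distributed over its connection weights.
    inputs = [1.0, 0.0, 1.0]
    print(unit_activation(inputs, [0.5, -0.3, 0.8], -0.2))
    print(unit_activation(inputs, [-0.4, 0.9, 0.1], 0.1))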
----------------------------------------------------------------
UPCOMING TALKS
Oct 8: Terry Winograd, Computer Science, Stanford
Oct 15: Ron Kaplan, Xerox PARC
Oct 22: Lotfi Zadeh, Computer Science, UCB
Oct 29: Mardi Horowitz, Psychiatry, UCSF
Nov 5: Edward Zalta, CSLI, Stanford
Nov 12: Robert Wilensky, Computer Science, UCB
Nov 19: Richard Alterman, Computer Science, UCB
Nov 26: Eve Clark, Linguistics, Stanford
Dec 3: Bernard Baars, Langley Porter, UCSF
* * * * *
ELSEWHERE ON CAMPUS
Steven Christman will be speaking on ``Visual Persistence'' on
Friday, October 4, 1985, at 4:00 p.m. in the Beach Room, 3105
Tolman Hall, UCB.
∂25-Sep-85 1738 EMMA@SU-CSLI.ARPA Newsletter September 26, No. 47
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 25 Sep 85 17:38:40 PDT
Date: Wed 25 Sep 85 16:54:08-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Newsletter September 26, No. 47
To: friends@SU-CSLI.ARPA
Tel: 497-3479
C S L I N E W S L E T T E R
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
September 26, 1985 Stanford Vol. 2, No. 47
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR *THIS* THURSDAY, September 26, 1985
12 noon TINLunch
Ventura Hall ``The Concept of Supervenience''
Conference Room Discussion led by Carol Cleland
2:15 p.m. CSLI Talk
Ventura Hall No talk this week
3:30 p.m. Tea
Ventura Hall
←←←←←←←←←←←←
CSLI ACTIVITIES FOR *NEXT* THURSDAY, October 3, 1985
12 noon TINLunch
Ventura Hall ``Idealized Cognitive Models'' and ``Metonymic Models''
Conference Room Sections 4, 5 of ``Women, Fire, and Dangerous Things''
by George Lakoff
Discussion led by Douglas Edwards
(Abstract on page 2)
2:15 p.m. CSLI Seminar
Ventura Hall ``Notes from the STASS Underground''
Seminar Room David Israel, CSLI and SRI
(Abstract on page 2)
3:30 p.m. Tea
Ventura Hall
←←←←←←←←←←←←
THIS YEAR'S THURSDAY ACTIVITIES
CSLI's year will be starting next Thursday, October 3, and several
changes have been made.
TINLunches will be organized by Chris Menzel and Mats Rooth, two
CSLI postdoctoral fellows. They will continue to meet at noon in
the Ventura Conference room.
Thursday Seminars will have a different format this year and will
consist of either individual presentations from the postdocs or a
presentation by one of the new projects of its goals and progress.
Thursday Colloquia will be rarer and of more general interest.
Each project will be responsible for one colloquium, and we hope to
have three colloquia a quarter. Time and location of the colloquia
may vary.
Next week's newsletter will contain a list of the new projects and a
tentative calendar for the Fall quarter.
!
Page 2 CSLI Newsletter September 26, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
ABSTRACT FOR NEXT WEEK'S TINLUNCH
``Idealized Cognitive Models'' and ``Metonymic Models''
Sections 4, 5 of ``Women, Fire, and Dangerous Things''
According to Lakoff, many words are understood by reference to
``Idealized Cognitive Models'' (ICMs) which describe the ideal
circumstances in which the phenomena these words refer to are
conceived to exist. Some uses of a word can only be understood by
treating the word's ICM as true even when it is known to be false in
general. Other uses modify the word's meaning by more or less
explicitly calling the ICM into question or by focusing on cases to
which the ICM clearly fails to apply.
Thus linguistic puzzles can arise. For instance ``bachelor'' is
often defined as ``unmarried man,'' and ``to lie'' as ``to make a
false statement,'' even though it is well known that these terms are
not coextensive with their definitions. When a word is defined, its
ICM is taken for granted, but when a purported example is judged,
failure of applicability of the ICM can make the purported example
illegitimate or at least atypical. The ICMs for ``bachelor'' and
``lie'' fail partly or totally for priests, children, polygamists,
misleading true statements, polite nothings, and accidental errors.
Syncategorematic noun modifiers often affect the ICM. Thus we get
``social lie,'' ``white lie,'' ``eligible bachelor'' (this one
reinforces the ICM), ``foster mother,'' ``surrogate mother,'' and so
on.
ICMs are interesting in that they seem to be used in reasoning
generally, not just in lexical semantics. They are akin to, but not
identical with, various constructs developed for artificial
intelligence, such as frames, scripts, contexts, data pools, etc.
--Douglas Edwards
←←←←←←←←←←←←
ABSTRACT OF NEXT WEEK'S CSLI SEMINAR
``Notes from the STASS Underground''
I will try to explain the meaning and import of one of the hottest
acronyms at CSLI -- ``STASS.'' In particular, I will try to explain
why there should be a Situation Theory as well as a Situation
Semantics. --David Israel
←←←←←←←←←←←←
CSLI TALK
``Verbs and Time''
Dorit Abusch, Tel-Aviv University
Tuesday, October 1, 1 pm, Ventura Conference Room
In ``Word Meaning and Montague Grammar,'' David Dowty analyzed
aspectual clauses in terms of an ``aspectual calculus'' consisting of
stative predicates and operators such as BECOME and CAUSE. For
instance, achievements, including many morphological inchoatives, are
analyzed as having the form lambda x[Become(P(x))]. Accomplishments,
including many morphological causatives, are analyzed in terms of
CAUSE. Dowty and Lauri Carlson noted that some inchoatives, such as
(the verb) ``cool,'' meet the test for process verbs. I discuss these
inchoatives and similar causatives. The relation between the
operators and the verb classification is complex. I argue that the
classification breaks down for certain causatives, such as the
transitive versions of ``gallop'' and ``darken.''
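To fix notation, Dowty's standard decompositions (given here as a
gloss, not necessarily the analysis Abusch will defend) can be written

    cool (inchoative):  \lambda x.\,\mathrm{BECOME}(\mathrm{cool}'(x))
    cool (causative):   \lambda y\,\lambda x.\,\mathrm{CAUSE}(x,\,\mathrm{BECOME}(\mathrm{cool}'(y)))

so that ``the soup cooled'' reports a change into the state
cool'(soup), while ``Mary cooled the soup'' adds a causing argument;
the question is how verbs of these shapes can nonetheless pattern with
processes.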
!
Page 3 CSLI Newsletter September 26, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
AFA SEMINAR
This quarter there will be a small informal seminar going through
Peter Aczel's work on the anti-foundation axiom (AFA) in set theory,
together with some of the applications found by people here at CSLI.
We will start at the beginning, but assume familiarity with the
cumulative hierarchy and ZFC. The seminar will be Thursdays at 4:15
when there is no CSLI colloquium, in the Ventura Conference room. Jon
Barwise will give a brief introduction on September 26, and then we
will organize the rest of the quarter. If you would like to be added
to the AFA mailing list, contact Westerstahl@csli.
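For those deciding whether to attend: in Aczel's standard formulation
(the seminar may set things up differently), AFA says that every graph
has a unique decoration, i.e., for each graph there is exactly one
function d such that

    d(n) = \{\, d(m) : n \rightarrow m \,\} \quad \text{for every node } n.

Applied to a single node with an edge to itself, this yields a unique
set \Omega satisfying \Omega = \{\Omega\}, precisely the circularity
that the foundation axiom of ZFC rules out.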
←←←←←←←←←←←←
NEW PROJECT MEETING ON ENVIRONMENTS
Mondays 1-2 in the trailer classroom, Ventura
Beginning Monday, September 30 there will be a weekly meeting on
environments for working with symbolic structures (this includes
programming environments, specification environments, document
preparation environments, ``linguistic workstations,'' and
grammar-development environments). As a part of doing our research,
many of us at CSLI have developed such environments, sometimes as a
matter of careful design, and sometimes by the seat of the pants. In
this meeting we will present to each other what we have done, and also
look at work done elsewhere (both through guest speakers and reading
discussions).
The goal is to look at the design issues that come up in building
environments and to see how they have been approached in a variety of
cases. We are not concerned with the particular details (``pop-up
menus are/aren't better than pull-down menus'') but with more
fundamental problems. For example:
What is the nature of the underlying structure the environment
supports: chunks of text? a data-base of relations? a tree or graph
structure? How is this reflected in the basic mode of operation
for the user?
How does the user understand the relation between objects (and
operations on them) that appear on the visible representation
(screen and/or hardcopy) and the corresponding objects (and
operations) on some kind of underlying structure? How is this
maintained in a situation of multiple presentations (different
views and/or multiple windows)? How is it maintained in the face
of breakdown (system failure or catastrophic user error in the
middle of an edit, transfer, etc.)?
Does the environment deal with a distributed network of storage and
processing devices? If so, does it try to present some kind of
seamless ``information space'' or does it provide a model of
objects and operations that deals with moving things (files,
functions, etc.) from one ``place'' to another, where different
places have relevant different properties (speed of access,
security, shareability, etc.)?
!
Page 4 CSLI Newsletter September 26, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
How is consistency maintained between separate objects that are
conceptually linked (source code and object code, formatter source
and printer-ready files, grammars and parse-structures generated
from them, etc.)? To what extent is this simply left to user
convention, supported by bookkeeping tools, or automated?
What is the model for change of objects over time? This includes
versions, releases, time-stamps, reference dates, change logs,
etc. How is information about temporal and derivational
relationships supported within the system?
What is the structure for coordination of work? How is access to
the structures regulated to prevent ``stepping on each other's
toes,'' to facilitate joint development, to keep track of who needs
to do what when?
Lurking under these are the BIG issues of ontology, epistemology,
representation, and so forth. We hope that our discussions on a more
down-to-earth level will be guided by a consideration of the larger
picture and will contribute to our understanding of it.
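To make one of these questions concrete, here is the simplest fully
automated answer to the consistency question: make-style timestamp
bookkeeping. The sketch (invented for this announcement, not any CSLI
system, with hypothetical file names) also hints at why the question
is hard: timestamps record staleness but nothing about the
derivational relationships themselves.

    import os

    def stale(derived, sources):
        # A derived object (object code, printer-ready file, parse
        # structures) is stale if it is missing or is older than any
        # of the sources it was generated from.
        if not os.path.exists(derived):
            return True
        dtime = os.path.getmtime(derived)
        return any(os.path.getmtime(s) > dtime for s in sources)

    # Hypothetical conceptually-linked pair: formatter source and
    # printer-ready file.
    if stale("paper.press", ["paper.tex"]):
        print("paper.press is out of date with respect to paper.tex")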
The meeting is open to anyone who wishes to attend. Topics will be
announced in advance in the newsletter. The first meeting will be
devoted to a general discussion of what should be addressed and to
identifying the relevant systems (and corresponding people) within
CSLI, and within the larger (Stanford, Xerox, SRI) communities in
which it exists. --Terry Winograd
←←←←←←←←←←←←
INTERACTIONS OF MORPHOLOGY, SYNTAX, AND DISCOURSE
``Cree Verb Inflection: Linking Features to Grammatical Functions''
Summary of the meeting on September 12
Cree (Algonquian) is a non-configurational language in which
grammatical functions are encoded by means of a complicated system of
verbal inflection. The verb has ten inflectional affix positions; no
single position is dedicated to a particular grammatical function.
The shape of the person and number affixes is the same for both
subject and object. The task of linking person and number feature
values with the appropriate grammatical function falls to a set of
morphemes traditionally called ``theme signs.''
The talk focussed on the role of the theme signs. Some recent
theoretical accounts have analyzed the theme signs as marking a voice
opposition; on these accounts, the theme signs would be derivational,
rather than inflectional. A subset of the theme signs would mark the
application of a rule like passive, or a rule of ergative relinking,
in which the theme argument is linked to subject, and the agent
argument is linked to object. However, syntactic tests (copying to
object, quantifier float, complement control) show that the passive
and the ergative relinking hypotheses must both be rejected.
In Dahlstrom's analysis, the theme signs are inflectional, acting
as a filter on possible linkings of person and number features to
grammatical functions. The other inflectional affixes carry specific
feature values for person and number, but are unspecified for
grammatical function. Ungrammatical linkings of feature values to
grammatical functions are ruled out by general conditions of
completeness, coherence, and consistency. --Amy Dahlstrom
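The filtering idea can be caricatured in a few lines of code. The
direct/inverse opposition below is a schematic stand-in, not
Dahlstrom's actual inventory of theme signs, and the conditions of
completeness, coherence, and consistency are left out entirely.

    from itertools import permutations

    RANK = {"2": 3, "1": 2, "3": 1}   # person hierarchy: 2 > 1 > 3

    def theme_sign_allows(sign, subj, obj):
        # The theme sign carries no person features of its own; it
        # only filters candidate linkings of features to the
        # grammatical functions subject and object.
        if sign == "direct":
            return RANK[subj] > RANK[obj]
        if sign == "inverse":
            return RANK[subj] < RANK[obj]
        return False

    # The affixes register a 1st and a 3rd person argument but not
    # which is subject; the theme sign decides which linking survives.
    for subj, obj in permutations(["1", "3"]):
        if theme_sign_allows("inverse", subj, obj):
            print("subject:", subj, "object:", obj)   # subject: 3 object: 1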
!
Page 5 CSLI Newsletter September 26, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
NEW CSLI REPORTS
Report No. CSLI-85-31, ``A Formal Theory of Knowledge and Action''
by Robert C. Moore, and Report No. CSLI-85-32, ``Finite State
Morphology: A Review of Koskenniemi'' by Gerald Gazdar, have just been
published. These reports may be obtained by writing to David Brown,
CSLI, Ventura Hall, Stanford, CA 94305 or Brown@SU-CSLI.
-------
∂02-Oct-85 1248 @SU-CSLI.ARPA:admin@ucbcogsci.Berkeley.EDU UCB Cognitive Science Seminar--Oct. 8 (Terry Winograd, Stanford)
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 2 Oct 85 12:48:39 PDT
Received: from UCB-VAX.ARPA by SU-CSLI.ARPA with TCP; Wed 2 Oct 85 12:45:30-PDT
Received: by UCB-VAX.ARPA (5.28/5.11)
id AA23805; Wed, 2 Oct 85 12:45:01 PDT
Received: by ucbcogsci.ARPA (5.5/5.7)
id AA22854; Wed, 2 Oct 85 12:46:31 PDT
Date: Wed, 2 Oct 85 12:46:31 PDT
From: admin@ucbcogsci.Berkeley.EDU (Cognitive Science Program)
Message-Id: <8510021946.AA22854@ucbcogsci.ARPA>
To: cogsci-friends@ucbcogsci.Berkeley.EDU
Subject: UCB Cognitive Science Seminar--Oct. 8 (Terry Winograd, Stanford)
BERKELEY COGNITIVE SCIENCE PROGRAM
Fall 1985
Cognitive Science Seminar -- IDS 237A
TIME: Tuesday, October 8, 11:00 - 12:30
PLACE: 240 Bechtel Engineering Center
DISCUSSION: 12:30 - 1:30 in 200 Building T-4
SPEAKER: Terry Winograd, Computer Science, Stanford University
TITLE: "What Can Cognitive Science Tell Us About Computers?"
Much work in cognitive science rests on the assumption that
there is a common form of "information processing" that under-
lies human thought and language and that also corresponds to
the ways we can program digital computers. The theory should
then be valid both for explaining the functioning of the
machines (at whatever level of "intelligence") and for under-
standing how they can be integrated into human situations and
activities.
I will argue that theories like those of current cognitive
science are based on a "rationalistic" tradition, which is
appropriate for describing the mechanics of machine operation,
but is inadequate for understanding human cognitive activity
and misleading as a guide to the design and application of
computer technology. The emphasis will be on looking at
alternatives to this tradition, as a starting point for under-
standing what computers really can do.
---------------------------------------------------------------
UPCOMING TALKS
Oct 15: Ron Kaplan, Xerox PARC
Oct 22: Lotfi Zadeh, Computer Science, UCB
Oct 29: Mardi Horowitz, Psychiatry, UCSF
Nov 5: Edward Zalta, CSLI, Stanford
Nov 12: Robert Wilensky, Computer Science, UCB
Nov 19: Richard Alterman, Computer Science, UCB
Nov 26: Eve Clark, Linguistics, Stanford
Dec 3: Bernard Baars, Langley Porter, UCSF
* * * * *
ELSEWHERE ON CAMPUS
Edward De Avila, of Linguametrics, Inc., will be speaking on
"The Status of Language Minority Students in the U.S.: Scho-
lastic Performance in Math and Science" on Monday, October 7,
1985, at 4:10 p.m. in 2515 Tolman Hall, UCB.
Boris Gasparov, of the UCB Slavic Languages and Literatures
Dept., will be speaking on "Stylistic `Shifters' in Russian"
on Tuesday, Oct. 8, 1985, at 8:00 p.m. in the Tilden Room,
Student Union Bldg., UCB.
William Cole, Cognitive Science, will be speaking on "Medical
Cognitive Graphics" on Friday, October 11, 1985, at 4:00 p.m.
in the Beach Room, 3105 Tolman Hall, UCB.
--------------------------------------------------------------
∂02-Oct-85 1726 EMMA@SU-CSLI.ARPA addendum to the newsletter
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 2 Oct 85 17:26:06 PDT
Date: Wed 2 Oct 85 17:25:28-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: addendum to the newsletter
To: friends@SU-CSLI.ARPA
Tel: 497-3479
Re: the tentative Fall calendar
Each project is responsible for ONE colloquium sometime during the
two to three week period listed.
-Emma Pease
-------
∂02-Oct-85 1744 EMMA@SU-CSLI.ARPA Newsletter October 3, No. 48
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 2 Oct 85 17:43:59 PDT
Date: Wed 2 Oct 85 16:55:24-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Newsletter October 3, No. 48
To: friends@SU-CSLI.ARPA
Tel: 497-3479
C S L I N E W S L E T T E R
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
October 3, 1985 Stanford Vol. 2, No. 48
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR *THIS* THURSDAY, October 3, 1985
12 noon TINLunch
Ventura Hall ``Idealized Cognitive Models'' and ``Metonymic Models''
Conference Room Sections 4, 5 of ``Women, Fire, and Dangerous Things''
by George Lakoff
Discussion led by Douglas Edwards
2:15 p.m. CSLI Seminar
Redwood Hall ``Notes from the STASS Underground''
Room G-19 David Israel, CSLI and SRI
3:30 p.m. Tea
Ventura Hall
←←←←←←←←←←←←
CSLI ACTIVITIES FOR *NEXT* THURSDAY, October 10, 1985
12 noon TINLunch
Ventura Hall ``Artificial Intelligence Meets Natural Stupidity''
Conference Room by Drew McDermott
Discussion led by Roland Hausser, U. of Munich
2:15 p.m. CSLI Seminar
Redwood Hall ``Ontology and Intensionality''
Room G-19 Edward Zalta, CSLI
Discussion led by John Perry
(Abstract on page 2)
3:30 p.m. Tea
Ventura Hall
←←←←←←←←←←←←
TENTATIVE FALL SCHEDULE FOR THURSDAYS
THURSDAY SEMINARS
Date Person or Group responsible
10-3 Situation Theory and Situation Semantics
10-10 Zalta
10-17 Sells
10-24 Discourse, Intention and Action
10-31 Foundations of Document Preparation
11-7 Phonology and Phonetics
11-14 Finite State Morphology
11-21 Computational Models of Spoken Language
12-5 Winograd
!
Page 2 CSLI Newsletter October 3, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
THURSDAY COLLOQUIA
10-3 to 10-17: Situation Theory and Situation Semantics
10-24 to 11-7: Discourse, Intention and Action
11-14 to 11-20: Phonology & Phonetics, Finite State Morphology, &
Computational Models of Spoken Language
11-21 Joe Traub, CS Dept., Columbia
←←←←←←←←←←←←
ABSTRACT FOR NEXT WEEK'S SEMINAR
``Ontology and Intensionality''
The foundations of semantics require more than just a theory of
properties, relations, and propositions. Such theories do show that
logically equivalent relations and propositions are not necessarily
identical, but they do not provide us with an explanation of modality
and tense (for which we need something like worlds and times), nor
with an explanation of the truth conditions, entailments, and
substitutivity failures involving codesignative names and descriptions
of important varieties of intensional sentences (for which we need
something like intentional objects). The theory which I have been
developing has logical axioms which generate properties, relations,
and propositions, and proper axioms which generate abstract
individuals, some of which have just the features worlds have and some
of which can help us explain intensionality by serving as intentional
objects. In the seminar, I'll show how to extend the theory to define
times and account for the many similarities between worlds and times.
Then I'll show that, given this ontology, the traditional
understanding of intensionality must be revised and that certain
classic puzzles involving modality and descriptions have a simple
solution. --Ed Zalta
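The proper axioms Zalta alludes to center on a comprehension schema
for abstract objects; in the notation of his published work (quoted
here as a gloss, not as the talk's official statement), for any
condition \varphi on properties in which x is not free,

    \exists x\,(A!x \land \forall F\,(xF \leftrightarrow \varphi)),

where A!x says that x is abstract and xF says that x encodes the
property F. Worlds, times, and intentional objects are then particular
abstract individuals obtained by suitable choices of \varphi.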
←←←←←←←←←←←←
LOGIC LUNCH
On Mondays there will be an informal brown bag logic lunch in the
Philosophy Lounge, building 90, from 12 to 1, starting October 7. If
you are interested in logic, please come any time. Send questions to
Jon Barwise (barwise@su-csli).
←←←←←←←←←←←←
LOGIC SEMINAR
The Logic Seminar will resume October 7 in the mathematics seminar
room. It will meet every Monday at 4:15. Contact Sol Feferman
(SF@su-csli) for details. Information on the first seminar follows.
``Prewellordering and the Generalized Reduction Property''
Prof. Shaughan Lavine, Dept. of Mathematics, Stanford
Monday, Oct. 7, 4:15-5:30 P.M.
Room 383N (faculty lounge, 3d floor, Math. Bldg.).
-------
∂03-Oct-85 1236 WINOGRAD@SU-CSLI.ARPA Environments group - Monday 12:00pm
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 3 Oct 85 12:36:52 PDT
Date: Thu 3 Oct 85 12:35:46-PDT
From: Terry Winograd <WINOGRAD@SU-CSLI.ARPA>
Subject: Environments group - Monday 12:00pm
To: friends@SU-CSLI.ARPA
--sorry for the separate distribution but this didn't make it
in time for the newsletter this week. --t
----------------------
COMING WEEK (Monday Oct. 7): 12:00 to 1:15 in the Ventura trailer
classroom (NOTE NEW REGULAR TIME), David Levy (Xerox PARC and CSLI)
will describe his work on a theoretical foundation for document
preparation environments.
------
PREVIOUS WEEK (Sept. 30): At the first meeting of the environments
group we set out the general directions for our discussions. We
identified some major dimensions along which to compare and examine
environments and made an initial list of examples that might be
presented. This list is very sketchy -- the random result of what
happened to come up in conversation. We are eager for further details
and suggestions (either of systems for general consideration, or of
systems that specific people would like to talk about):
Programming environments: Interlisp, Smalltalk, Cedar, [all 3 Xerox],
(Linton) [Berkeley/Stanford], Gandalf [CMU], Mentor [INRIA], ZetaLisp
[Symbolics], Kee [Intellicorp], HPRL, HPLisp [last 2 Hewlett-Packard]
Grammar development environments: LFG [CSLI], HPSG [HP], BLT [CSLI]
Specification environments: Aleph [CSLI], (Balzer)[ISI]
Language development environments: MUIR [CSLI]
Document preparation environments: (Levy) [CSLI], Notecards [Xerox]
Data access and manipulation environments: ?
Mathematical and logical deduction environments: MACSYMA [MIT], FOL
[Stanford]
There is a variety of application areas not as central to CSLI concerns,
but in which environments are built. These include VLSI design,
CAD/CAM, image manipulation, mail systems, etc. In addition, most
operating systems take on the functions of an environment, either for
use outside of applications programs or as a base within them.
So-called "intelligent agents" are one attempt to provide a uniform
environment for a particular user interacting with multiple systems.
For each kind of environment there are specific problems dealing with
the particular structures being worked with (programs, proofs, grammars,
formatted documents, etc.). There is also a framework of common
problems having to do with the basic structure of items being
manipulated (text, trees, databases, etc.), their representation on a
screen or hardcopy, interfaces for operating through that
representation, storage on one or more devices, consistency between
aspects (e.g., source and compiled code, specifications and proofs),
change over time (versions, releases, etc.), coordination of access
among a group, etc.
Our plan is to address the basic conceptual issues by looking at one
particular environment or group of related environments in each session.
Next week's topic will be a discussion of the theoretical foundations of
document preparation, by David Levy.
-------
∂09-Oct-85 1656 @SU-CSLI.ARPA:admin@ucbcogsci.Berkeley.EDU Cognitive Science Seminar--Oct. 15 (Ron Kaplan, Xerox PARC & Stanford)
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 9 Oct 85 16:50:03 PDT
Received: from UCB-VAX.ARPA by SU-CSLI.ARPA with TCP; Wed 9 Oct 85 16:46:31-PDT
Received: by UCB-VAX.ARPA (5.28/5.11)
id AA05747; Wed, 9 Oct 85 16:47:54 PDT
Received: by ucbcogsci (5.28/5.12)
id AA16122; Wed, 9 Oct 85 16:48:25 PDT
Date: Wed, 9 Oct 85 16:48:25 PDT
From: admin@ucbcogsci.Berkeley.EDU (Cognitive Science Program)
Message-Id: <8510092348.AA16122@ucbcogsci>
To: cogsci-friends@ucbcogsci.Berkeley.EDU
Subject: Cognitive Science Seminar--Oct. 15 (Ron Kaplan, Xerox PARC & Stanford)
BERKELEY COGNITIVE SCIENCE PROGRAM
Fall 1985
Cognitive Science Seminar -- IDS 237A
TIME: Tuesday, October 15, 11:00 - 12:30
PLACE: 240 Bechtel Engineering Center
DISCUSSION: 12:30 - 1:30 in 200 Building T-4
SPEAKER: Ronald M. Kaplan,
Xerox Palo Alto Research Center and Center
for the Study of Language and Information,
Stanford University
TITLE: ``Interactive Modularity''
Comprehensible scientific explanations for most complex
natural phenomena are modular in character. Phenomena are
explained in terms of the operation of separate and indepen-
dent components, with relatively minor interactions. Modular
accounts of complex cognitive phenomena, such as language pro-
cessing, have also been proposed, with distinctions between
phonological, syntactic, semantic, and pragmatic modules, for
example, and with distinctions among various rules within
modules. But these modular accounts seem incompatible with
the commonplace observations of substantial interactions
across component boundaries: semantic and pragmatic factors,
for instance, can be shown to operate even before the first
couple of phonemes in an utterance have been identified.
In this talk I consider several methods of reconciling
modular descriptions in service of scientific explanation with
the apparent interactivity of on-line behavior. Run-time
methods utilize interpreters that allow on-line interleaving
of operations from different modules, perhaps including addi-
tional "scheduling" components for controlling the cross-
module flow of information. But depending on their mathemati-
cal properties, modular specifications may also be transformed
by off-line, compile-time operations into new specifications
that directly represent all possible cross-module interac-
tions. Such compilation techniques allow for run-time elimi-
nation of module boundaries and of intermediate levels of
representation. I will illustrate these techniques with exam-
ples involving certain classes of phonological rule systems
and structural correspondences in Lexical-Functional Grammar.
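A toy rendering of the run-time/compile-time contrast, with string
rewrites standing in for rule modules (the modules are invented, and
in the real case the composed objects are finite-state devices rather
than functions, so compilation genuinely eliminates the intermediate
representation instead of merely hiding it):

    def nasal_assimilation(form):
        # Stand-in for one independently stated rule module.
        return form.replace("n+p", "mp")

    def boundary_deletion(form):
        # Stand-in for a second, independent module.
        return form.replace("+", "")

    # Run-time method: an interpreter interleaves the modules and the
    # intermediate level of representation remains visible.
    def run_time(form):
        return boundary_deletion(nasal_assimilation(form))

    # Compile-time method: compose the modules once, off-line, into a
    # single specification with no internal module boundary.
    def compile_modules(*modules):
        def composed(form):
            for m in modules:
                form = m(form)
            return form
        return composed

    compiled = compile_modules(nasal_assimilation, boundary_deletion)
    assert run_time("in+put") == compiled("in+put") == "imput"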
--------------------------------------------------------------
UPCOMING TALKS
Oct 22: Lotfi Zadeh, Computer Science, UCB
Oct 29: Mardi Horowitz, Psychiatry, UCSF
Nov 5: Edward Zalta, CSLI, Stanford
Nov 12: Robert Wilensky, Computer Science, UCB
Nov 19: Richard Alterman, Computer Science, UCB
Nov 26: Eve Clark, Linguistics, Stanford
Dec 3: Bernard Baars, Langley Porter, UCSF
--------------------------------------------------------------
∂09-Oct-85 1703 EMMA@SU-CSLI.ARPA Newsletter October 10, No. 49
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 9 Oct 85 17:03:09 PDT
Date: Wed 9 Oct 85 16:51:08-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Newsletter October 10, No. 49
To: friends@SU-CSLI.ARPA
Tel: 497-3479
C S L I N E W S L E T T E R
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
October 10, 1985 Stanford Vol. 2, No. 49
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR *THIS* THURSDAY, October 10, 1985
12 noon TINLunch
Ventura Hall ``Artificial Intelligence Meets Natural Stupidity''
Conference Room by Drew McDermott
Discussion led by Roland Hausser, U. of Munich
(Abstract on page 1)
2:15 p.m. CSLI Seminar
Redwood Hall ``Ontology and Intensionality''
Room G-19 Edward Zalta, CSLI
Discussion led by John Perry
3:30 p.m. Tea
Ventura Hall
←←←←←←←←←←←←
CSLI ACTIVITIES FOR *NEXT* THURSDAY, October 17, 1985
12 noon TINLunch
Ventura Hall ``Economy of Speech Gestures''
Conference Room by Bjorn Lindblom (who will be present)
Discussion led by Bill Poser
(Abstract on page 2)
2:15 p.m. CSLI Seminar
Redwood Hall ``On the Notion of `Logophoricity' ''
Room G-19 Peter Sells, CSLI
(Abstract on page 2)
3:30 p.m. Tea
Ventura Hall
←←←←←←←←←←←←
ABSTRACT FOR THIS WEEK'S TINLUNCH
Artificial Intelligence Meets Natural Stupidity
McDermott discusses three `mistakes', or rather bad habits, which are
frequent in A.I. work. He speaks from his own experience and cites
several illuminating and amusing examples from the literature. In this
TINLunch I will focus on his thoughts on the treatment of reference in
A.I., presented in the section entitled `unnatural
language'. --Roland Hausser
!
Page 2 CSLI Newsletter October 10, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
ABSTRACT FOR NEXT WEEK'S TINLUNCH
Economy of Speech Gestures
This paper discusses a functionalist approach to phonetics and
phonology in which the properties of phonological systems are to be
deduced from biological and social factors rather than from
axioms governing a language-particular formal system. --Bill Poser
←←←←←←←←←←←←
ABSTRACT FOR NEXT WEEK'S SEMINAR
On the Notion of `Logophoricity'
The notion of `logophoricity' was introduced in studies of African
languages in which a morphologically distinct `logophoric' pronoun has
a distribution distinct from other pronouns, used with predicates of
communication and consciousness. More recently, this notion has been
used in accounts of anaphora with non-clause-bounded reflexive
pronouns, as are found in the Scandinavian languages, and Japanese.
Such analyses propose a feature [+log] which is supposed to be
specified on certain NPs by certain predicates. I will present the
beginnings of a formal construction of the notion of `logophoricity'
using the Discourse Representation Structures framework developed by
Hans Kamp. I will propose that there is no such thing as
logophoricity per se, but rather that it stems from the interaction
of two more primitive notions: the person with respect to whose
consciousness (or `SELF') the report is made, and the person from
whose point-of-view the report is made (the `PIVOT'). I will show how
this system extends to certain facts (from Japanese) which are not
analyzable with the simple feature [+log], and how it enables one to
characterize cross-linguistic variation in what counts for
`logophoricity'. --Peter Sells
←←←←←←←←←←←←
ENVIRONMENTS GROUP MEETING
Monday, October 14, noon, Ventura Trailer Classroom
David Levy (Xerox PARC and CSLI) will continue to describe his work
on a theoretical foundation for document preparation environments.
Specifically, he will describe in some detail the theory of marking
itself, and its relevance to various computer systems. We will
discuss some points that came up in questions, such as the relation of
``indirect marking'' to different kinds of tools, the contrast between
a psychological theory (how people think when they use a system) and
an ontological account (of the basic objects, actions, and
relationships that are available for them to work with), and the
problems of multiple levels of representation (e.g., a macro command
stands for a sequence of ``characters'' which in turn represent
various ``figures'', etc.).
See the summary of the meeting on October 7 (later in this
newsletter) for more information.
←←←←←←←←←←←←
LINGUISTICS COLLOQUIUM
``Underspecification and Opacity''
Douglas Pulleyblank, USC
Tuesday, October 15, Bldg. 200, Rm. 217, 3:15 p.m.
!
Page 3 CSLI Newsletter October 10, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
LOGIC SEMINAR
``Computability of Standard Processes in Analysis and Physics''
Marian Pour-El, University of Minnesota
Monday, October 14, Noon to 1:15
Ventura Hall Seminar Room
Note the change of time and place.
The regular meeting time of this seminar has been changed to
Friday, noon. We will meet alternate weeks beginning Friday, October
25. --Sol Feferman
←←←←←←←←←←←←
INTERACTIONS OF MORPHOLOGY, SYNTAX, AND DISCOURSE
Summary of the meeting on September 26
Farrell Ackerman gave a talk entitled ``Brackets and Branches:
Phrasal Verbs.'' Assuming a provisional (and, perhaps, traditional)
definition of phrasal verbs as morpholexically composed entities whose
constitutive pieces exhibit syntactic independence, the discussion
focused on the syntactic and lexical aspects of these Janus-like
elements.
From a syntactic perspective the interaction of phrasal verbs and
the rule of V(erb)-movement in, e.g., Vata (a Kru language analyzed in
Koopman 1983) was discussed. V-movement in Vata is one particular case
of the V-movement motivated along similar lines, i.e., by hypothesizing
a V-final d-structure representation, for the Germanic languages German
and Dutch. Evidence of similar syntactic discontinuities between
particles (called `preverbs') and associated verb stems was given for
the Ugric language Hungarian. On the other hand, it was suggested
that in this instance it is not the V but the particle which `moves.'
After a presentation of the preverb-verb sequence possibilities in
Hungarian discussion turned to the lexical aspects of preverb-verb
collocations.
From a lexical perspective the set of Hungarian preverbs can be,
roughly, divided into two groups: prefixes and arguments. The
prefixes (minus a class of intriguing exceptions which were not
discussed) are categorially indeterminate and do not exhibit
inflectional morphology indicating any relation of the prefix with the
verb. Arguments, in contrast, are categorially determinable (in fact,
are typically instantiated by and restricted to appear as a major
lexical category) and bear inflectional morphology indicating their
grammatical relation to the verb. The combination of prefix + verb
was hypothesized to be a type of verb derivation via prefixation while
argument + verb was regarded as a type of lexical compounding.
Evidence for the lexical nature of these phrasal verbs was taken to be
their ability to serve as input for further derivational processes
such as nominalization and adjectivalization.
The assumption that phrasal verbs are lexical compositions leads to
problems for the so-called Lexical Integrity Hypothesis, the procedure
of Bracket Erasure in Lexical Phonology and, in general, leads to what
have become known as `Bracketing Paradoxes.' It was proposed (following
Simpson 1983 and Komlosy and Ackerman 1983) that there is a process of
`bracket retention' restricted to the domain of predicate formation
which accounts for the main difference, i.e., the syntactic
separability of preverbs, in the behavior of preverbs in numerous
languages.
!
Page 4 CSLI Newsletter October 10, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
SUMMARY OF ENVIRONMENTS GROUP MEETING
September 30, 1985
At the first meeting of the environments group we set out the
general directions for our discussions. We identified some major
dimensions along which to compare and examine environments and made an
initial list of examples that might be presented. This list is very
sketchy---the random result of what happened to come up in
conversation. We are eager for further details and suggestions
(either of systems for general consideration, or of systems that
specific people would like to talk about):
Programming environments: Interlisp, Smalltalk, Cedar, [all 3 Xerox],
(Linton) [Berkeley/Stanford], Gandalf [CMU], Mentor [INRIA],
ZetaLisp [Symbolics], Kee [Intellicorp], HPRL, HPLisp [last 2
Hewlett-Packard]
Grammar development environments: LFG [CSLI], HPSG [HP], BLT [CSLI]
Specification environments: Aleph [CSLI], (Balzer)[ISI]
Language development environments: MUIR [CSLI]
Document preparation environments: (Levy) [CSLI], Notecards [Xerox]
Data access and manipulation environments:
Mathematical and logical deduction environments: MACSYMA [MIT], FOL
[Stanford]
There is a variety of application areas not as central to CSLI
concerns, but in which environments are built. These include VLSI
design, CAD/CAM, image manipulation, mail systems, etc. In addition,
most operating systems take on the functions of an environment, either
for use outside of applications programs or as a base within them.
So-called ``intelligent agents'' are one attempt to provide a uniform
environment for a particular user interacting with multiple systems.
For each kind of environment there are specific problems dealing
with the particular structures being worked with (programs, proofs,
grammars, formatted documents, etc.). There is also a framework of
common problems having to do with the basic structure of items being
manipulated (text, trees, databases, etc.), their representation on a
screen or hardcopy, interfaces for operating through that
representation, storage on one or more devices, consistency between
aspects (e.g., source and compiled code, specifications and proofs),
change over time (versions, releases, etc.), coordination of access
among a group, etc.
Our plan is to address the basic conceptual issues by looking at
one particular environment or group of related environments in each
session.
!
Page 5 CSLI Newsletter October 10, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
SUMMARY OF ENVIRONMENTS GROUP MEETING
October 7, 1985
David Levy gave an overview of his work on a theoretical basis for
document preparation environments. He demonstrated the problems with
existing ``marking environments'' which combine conflicting approaches
to text layout, drawing, and window placement. The failure to
generalize the common elements in all of these leads to greater
complexity and to blind spots that create difficulty in maintaining,
documenting, and using such systems. Many of the relevant issues
apply to older marking technologies, but the computer has two novel
properties that demand a clear and explicit theory. First, marking is
indirect---the linkage between human physical action and what appears
on the screen (or paper) is mediated by linguistic or quasi-linguistic
commands. Second, there is a clear distinction between the surface
presentation (what you see) and the internal representation (its
underlying structure). The computer, unlike earlier forms, lets you
manipulate the underlying structure directly, with possibly complex
and distributed consequences to the surface presentation.
He then showed how we might begin to develop a theory of marking
with a coherent ontological basis. For example, we need to look at
something as mundane as the ``carriage return'' as having distinct and
sometimes confused aspects: it is a character (in the standard
representation), it denotes an area of non-marked space on a page, it
indicates a possible place to split a line in normal formatting, etc.
By carefully delineating the concepts involved in these different
aspects, we can produce systems that are simpler, easier to
understand, and more amenable to generalization.
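The carriage-return point is easy to demonstrate. A minimal sketch
(ours, not Levy's theory) of one character playing three separable
roles:

    doc = "first line\nsecond line"

    # Aspect 1: a character in the internal representation, occupying
    # a position just as a letter does.
    print(doc.count("\n"))     # -> 1

    # Aspect 2: it denotes unmarked space on the page; it leaves no
    # visible mark of its own.
    print(repr("\n"))          # -> '\n', visible only when quoted

    # Aspect 3: an instruction to the presentation layer about where
    # to split lines.
    for line in doc.split("\n"):
        print(line)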
←←←←←←←←←←←←
LICS CONFERENCE
A new conference, LICS (an acronym for ``Logic in Computer
Science''), will meet in Cambridge, Mass., June 16-18, 1986. The topics
to be covered include abstract data types, computer theorem proving
and verification, concurrency, constructive proofs as programs, data
base theory, foundations of logic programming, logic-based programming
languages, logics of programs, knowledge and belief, semantics of
programs, software specifications, type theory, etc. For a local copy
of the full call for papers, contact Jon Barwise (Barwise@CSLI) or
Joseph Goguen (Goguen@SRI-AI), members of the LICS Organizing
Committee.
!
Page 6 CSLI Newsletter October 10, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
COMMON SENSE AND NON-MONOTONIC REASONING SEMINARS
Organized by John McCarthy and Vladimir Lifschitz
Computer Science Dept., Stanford University
A series of seminars on Common Sense and Non-monotonic reasoning
will explore the problem of formalizing commonsense knowledge and
reasoning, with the emphasis on their non-monotonic aspects.
It is important to be able to formalize reasoning about physical
objects and mental attitudes, about events and actions on the basis of
predicate logic, as it can be done with reasoning about numbers,
figures, sets and probabilities. Such formalizations may lead to the
creation of AI systems which can use logic to operate with general
facts, which can deduce consequences from what they know and what they
are told and determine in this way what actions should be taken.
Attempts to formalize commonsense knowledge have been so far only
partially successful. One major difficulty is that commonsense
reasoning often appears to be non-monotonic, in the sense that getting
additional information may force us to retract some of the conclusions
made before. This is in sharp contrast to what happens in
mathematics, where adding new axioms to a theory can only make the set
of theorems bigger.
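In symbols, the contrast is with the monotonicity of classical
consequence,

    \text{if } T \vdash \varphi \text{ then } T \cup T' \vdash \varphi \text{ for every } T',

which fails for commonsense inference: from ``Tweety is a bird'' one
concludes that Tweety flies, but adding ``Tweety is a penguin''
defeats the conclusion.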
Circumscription, a transformation of logical formulas proposed by
John McCarthy, makes it possible to formalize non-monotonic reasoning
in classical predicate logic. A circumscriptive theory involves, in
addition to an axiom set, the description of a circumscription to be
applied to the axioms. Our goal is to investigate how commonsense
knowledge can be represented in the form of circumscriptive theories.
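For reference, the circumscription of a predicate P in an axiom A(P)
is, in its standard second-order form (variants exist, and the
abnormality theories mentioned below are built on top of it),

    A(P) \land \forall p\,[(A(p) \land p \le P) \rightarrow P \le p],

where p \le P abbreviates \forall x\,(p(x) \rightarrow P(x)). It says
that P satisfies the axioms and that no predicate satisfying them has
a smaller extension; conclusions that hold in all such minimal models
can be retracted when new axioms change which models are minimal.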
John McCarthy will begin the seminar by discussing some of the
problems that have arisen in using abnormality to formalize common
sense knowledge about the effects of actions using circumscription.
His paper ``Applications of Circumscription to Formalizing Common
Sense Knowledge'' is available from Rutie Adler, 358MJH. The paper
was given at the Non-monotonic Workshop, and the present version,
which is to be published in Artificial Intelligence, is not greatly
different. The problems in question arise in trying to use the
formalism of that paper.
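A minimal sketch in Python of circumscription read as minimization, on
a deliberately tiny finite domain (the axioms, names, and domain are
illustrative, not the seminar's): the axioms are bird(Tweety),
bird(Opus), and bird(x) & ~ab(x) -> flies(x); circumscribing ab keeps
only the models with a minimal abnormality set.

  from itertools import product

  DOMAIN = ["tweety", "opus"]
  BIRDS = {"tweety", "opus"}

  def holds(ab, flies):
      # the axiom bird(x) & ~ab(x) -> flies(x), checked at every x
      return all(x not in BIRDS or x in ab or x in flies for x in DOMAIN)

  models = []
  for ab_bits in product([0, 1], repeat=len(DOMAIN)):
      for fl_bits in product([0, 1], repeat=len(DOMAIN)):
          ab = {x for x, b in zip(DOMAIN, ab_bits) if b}
          flies = {x for x, b in zip(DOMAIN, fl_bits) if b}
          if holds(ab, flies):
              models.append((ab, flies))

  # keep only models whose ab-extension is minimal under set inclusion
  minimal = [(ab, fl) for ab, fl in models
             if not any(ab2 < ab for ab2, _ in models)]
  print(all("tweety" in fl for _, fl in minimal))   # True: Tweety flies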
The seminar will replace the circumscription seminar we had last
year. If you were on the mailing list for that seminar then you will
be automatically included in the new mailing list. If you would like
to be added to the mailing list (or removed from it) send a message to
Vladimir Lifschitz (VAL@SAIL).
The first meeting is in 252MJH on Wednesday, October 30, at 2pm.
-------
∂16-Oct-85 1425 @SU-CSLI.ARPA:admin@ucbcogsci.Berkeley.EDU UCB Cognitive Science Seminar--Oct. 22, 1985
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 16 Oct 85 14:25:31 PDT
Received: from UCB-VAX by SU-CSLI.ARPA with TCP; Wed 16 Oct 85 14:22:06-PDT
Received: by UCB-VAX (5.28/5.12)
id AA01415; Wed, 16 Oct 85 14:21:43 PDT
Received: by ucbcogsci (5.28/5.12)
id AA11157; Wed, 16 Oct 85 14:22:50 PDT
Date: Wed, 16 Oct 85 14:22:50 PDT
From: admin@ucbcogsci.Berkeley.EDU (Cognitive Science Program)
Message-Id: <8510162122.AA11157@ucbcogsci>
To: allmsgs@ucbcogsci.Berkeley.EDU, cogsci-friends@ucbcogsci.Berkeley.EDU
Subject: UCB Cognitive Science Seminar--Oct. 22, 1985
Cc: admin@ucbcogsci.Berkeley.EDU
BERKELEY COGNITIVE SCIENCE PROGRAM
Cognitive Science Seminar - IDS 237A
Tuesday, October 22, 11:00 - 12:30
240 Bechtel Engineering Center
Discussion: 12:30 - 1:30 in 200 Building T-4
``Meaning, Information and Possibility''
L. A. Zadeh
Computer Science Division, U.C. Berkeley
Our approach to the connection between meaning and information
is in the spirit of the Carnap--Bar-Hillel theory of state
descriptions. However, our point of departure is the assump-
tion that any proposition, p, may be expressed as a generalized
assignment statement of the form X isr C, where X is a variable
which is usually implicit in p, C is an elastic constraint on
the values which X can take in a universe of discourse U, and
the suffix r in the copula isr is a variable whose values
define the role of C in relation to X. The principal roles are
those in which r is d, in which case C is a disjunctive con-
straint; and r is c, p and g, in which cases C is conjunctive,
probabilistic and granular, respectively. In the case of a
disjunctive constraint, isd is written for short as is, and C
plays the role of a graded possibility distribution which asso-
ciates with each point (or, equivalently, state-description)
the degree to which it can be assigned as a value to X. This
possibility distribution, then, is interpreted as the informa-
tion conveyed by p. Based on this interpretation, we can con-
struct a set of rules of inference which allow the possibility
distribution of a conclusion to be deduced from the possibility
distributions of the premises. In general, the process of
inference reduces to the solution of a nonlinear program. The
connection between the solution of a nonlinear program and the
traditional methods of deduction in first-order logic is
explained and illustrated by examples.
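A minimal sketch in Python of the disjunctive case (the universe and
the two membership functions are toy inventions, not Zadeh's): each
premise about X contributes a graded possibility distribution over U,
premises combine pointwise by min, and the combined distribution is
the information conveyed.

  U = range(10)

  small    = {u: max(0.0, 1.0 - u / 4.0) for u in U}  # "X is small"
  not_tiny = {u: min(1.0, u / 2.0) for u in U}        # "X is not tiny"

  def combine(*dists):
      """Joint information of several premises about the same X."""
      return {u: min(d[u] for d in dists) for u in U}

  info = combine(small, not_tiny)
  best = max(info.values())
  print([u for u in U if info[u] == best])   # [1, 2]: most possible values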
----------------------------------------------------------------
UPCOMING TALKS
October 29: Mardi Horowitz, Psychiatry, UCSF
November 5: Edward Zalta, CSLI, Stanford
November 12: Robert Wilensky, Computer Science, UCB
November 19: Richard Alterman, Computer Science, UCB
November 26: Eve Clark, Linguistics, Stanford
December 3: Bernard Baars, Langley Porter, UCSF
----------------------------------------------------------------
ELSEWHERE ON CAMPUS
William Clancey of Stanford University will speak on ``Heuristic
Classification'' at the SESAME Colloquium on Monday, Oct. 21,
4:00pm, 2515 Tolman Hall.
Ruth Maki of North Dakota State University will speak on ``Meta-
comprehension: Knowing that you understand'' at the Cognitive
Psychology Colloquium, Friday, October 25, 4:00pm, Beach Room,
3105 Tolman Hall.
∂23-Oct-85 0749 @SU-CSLI.ARPA:RPERRAULT@SRI-AI.ARPA Talk by Bill Rounds
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 23 Oct 85 07:49:50 PDT
Received: from SRI-AI.ARPA by SU-CSLI.ARPA with TCP; Wed 23 Oct 85 07:45:02-PDT
Date: Wed 23 Oct 85 07:47:54-PDT
From: Ray Perrault <RPERRAULT@SRI-AI.ARPA>
Subject: Talk by Bill Rounds
To: friends@SU-CSLI.ARPA, aic-staff@SRI-AI.ARPA
cc: rperrault@SRI-AI.ARPA
SEMINAR ANNOUNCEMENT REMINDER
LOGIC AND LANGUAGE: CHARACTERIZING THE
COMPLEXITY OF LOGIC GRAMMARS
William C. Rounds
University of Michigan
4 p.m., Wednesday October 23, 1985
SRI International, Conference Room EJ228 (Bldg. E)
Modern artificial intelligence has seen the introduction of logic
as a tool for describing the syntax and semantics of natural language
grammars. In this talk I introduce two new notations for expressing
grammars, called CLFP and ILFP. These notations extend the first-order
theory of concatenation and integer arithmetic with a Least Fixed
Point operator to accommodate recursive definitions of predicates. The
notations can be thought of as variants of definite clause grammars.
They are extremely easy to write and to understand. I prove that a
language is definable in CLFP if and only if it is recognizable by a
Turing machine in exponential time, and definable in ILFP if and only
if it is recognizable in polynomial time. As an application, I show
how to express head grammars in ILFP, thereby proving that head
languages are recognizable in polynomial time in a particularly easy
way.
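A minimal sketch in Python of the least-fixed-point idea itself (not of
the CLFP/ILFP notations): the language {a^n b^n} defined as the least
set L with "" in L and aLb contained in L, computed by Kleene iteration
up to a length bound.

  MAX = 8

  def step(S):
      return {""} | {"a" + w + "b" for w in S if len(w) + 2 <= MAX}

  L, prev = set(), None
  while L != prev:           # iterate to the least fixed point
      prev, L = L, step(L)

  print(sorted(L, key=len))  # ['', 'ab', 'aabb', 'aaabbb', 'aaaabbbb']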
-------
∂23-Oct-85 1633 @SU-CSLI.ARPA:admin@cogsci.Berkeley.EDU UCB Cognitive Science Seminar--Oct. 29, 1985
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 23 Oct 85 16:31:57 PDT
Received: from UCB-VAX by SU-CSLI.ARPA with TCP; Wed 23 Oct 85 16:30:24-PDT
Received: by UCB-VAX (5.29/5.13)
id AA20261; Wed, 23 Oct 85 13:51:07 PDT
Received: by cogsci (5.29/5.13)
id AA07435; Wed, 23 Oct 85 13:52:25 PDT
Date: Wed, 23 Oct 85 13:52:25 PDT
From: admin@cogsci.Berkeley.EDU (Cognitive Science Program)
Message-Id: <8510232052.AA07435@cogsci>
To: allmsgs@cogsci.Berkeley.EDU, cogsci-friends@cogsci.Berkeley.EDU
Subject: UCB Cognitive Science Seminar--Oct. 29, 1985
Cc: admin@cogsci.Berkeley.EDU
BERKELEY COGNITIVE SCIENCE PROGRAM
Fall 1985
Cognitive Science Seminar - IDS 237A
Tuesday, October 29, 11:00 - 12:30
240 Bechtel Engineering Center
Discussion: 12:30 - 1:30 in 200 Building T-4
``Person Schemata''
Mardi J. Horowitz M.D.
Professor of Psychiatry, U.C.S.F.
The speaker directs the recently formed Program on Cons-
cious and Unconscious Processes of the John D. and Catherine T.
MacArthur Foundation. Research on person schemata is one of
the core agendas of this program.
After a brief description of the program, the discussion
will focus on clinical phenomena as segmented by different
states of mind in a single individual. By examining the confi-
guration in each state of mind as it occurs over time, it may
be possible to infer what the self schemata and role relation-
ship models are that organize thoughts, feelings and action
into observed patterns. The theory that forms the basis for
such inferences includes the postulate that each person's
overall self organization may include a partially nested
hierarchy of multiple self-concepts. A frequent set of states
of mind in pathological grief reactions will provide a concrete
illustration of phenomena, methods of inference, and a theory
of person schemata.
---------------------------------------------------------------------
UPCOMING TALKS
November 5: Edward Zalta, CSLI, Stanford
November 12: Robert Wilensky, Computer Science, UCB
November 19: Richard Alterman, Computer Science, UCB
November 26: Eve Clark, Linguistics, Stanford
December 3: Bernard Baars, Langley Porter, UCSF
---------------------------------------------------------------------
ELSEWHERE ON CAMPUS
John Dalbey, SESAME student, will present ``The Totally Effort-
less Problem Solver'' at the SESAME Colloquium on Monday,
October 28, 4:00pm, 2515 Tolman Hall.
Tom Wickens, U.C.L.A., will speak on ``Response Interactions in
Visual Detections'' at the Cognitive Psychology Colloquium,
Friday, November 1, 4:00pm, Beach Room, 3105 Tolman Hall.
∂23-Oct-85 1733 EMMA@SU-CSLI.ARPA Newsletter October 24, No. 51
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 23 Oct 85 17:33:25 PDT
Date: Wed 23 Oct 85 16:56:02-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Newsletter October 24, No. 51
To: friends@SU-CSLI.ARPA
Tel: 497-3479
!
C S L I N E W S L E T T E R
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
October 24, 1985 Stanford Vol. 2, No. 51
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR *THIS* THURSDAY, October 24, 1985
12 noon TINLunch
Ventura Hall ``A Problem for Actualism About Possible Worlds''
Conference Room by Alan McMichael
Discussion led by Edward Zalta
2:15 p.m. CSLI Seminar
Redwood Hall Discourse, Intention, and Action
Room G-19 Two talks given by Phil Cohen and Amichai Kronfeld
3:30 p.m. Tea
Ventura Hall
←←←←←←←←←←←←
CSLI ACTIVITIES FOR *NEXT* THURSDAY, October 31, 1985
12 noon TINLunch
Ventura Hall The Formation of Adjectival Passives
Conference Room by B. Levin and M. Rappaport
Discussion led by Mark Gawron
(Abstract on page 2)
2:15 p.m. CSLI Seminar
Redwood Hall Foundations of Document Preparation
Room G-19 David Levy, CSLI and Xerox PARC
(Abstract on page 2)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Redwood Hall The Structure of Social Facts
Room G-19 Prof. John Searle, Dept. of Philosophy, UC Berkeley
←←←←←←←←←←←←
CORRECTION
The coordinator for the Situation Theory and Situation Semantics
(STASS) project is Jon Barwise, not David Israel as stated in last
week's newsletter.
!
Page 2 CSLI Newsletter October 24, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
ABSTRACT OF NEXT WEEK'S TINLUNCH
The Formation of Adjectival Passives
B. Levin and M. Rappaport
This is Working Paper #2 in the MIT Lexicon Project, and though it
discusses some rather specific issues having to do with one (putative)
lexical rule of adjectival passive formation, it is an interesting
example of the lexicon at work in a GB-style theory, worked out in
unusual detail. It assumes no knowledge of the Lexicon Group's work
and only a minimal knowledge of GB.
Since Wasow 1977 it has been standard among generative grammarians
to assume two separate passivization rules, one for verbal passives,
another for adjectival passives. Levin and Rappaport argue against
the claim in Wasow 1980 that the second of these rules has a thematic
condition and propose an analysis of their own in which many of the
standardly-cited facts about adjectival passives fall out simply from
stipulating which arguments of a lexical item must be realized, and
assuming that such lexical facts are in the default case preserved in
the output of lexical rules. We thus have another case in which
thematic roles appear NOT to play the part they were claimed to play
in a specific morphological or syntactic process. Paradoxically,
although the paper is set in a framework which assumes specific
thematic roles, it presents an important negative result and casts
further doubt on the hypothesis that thematic roles play a significant
part in mediating the relation between syntax and lexical semantics.
--Mark Gawron
←←←←←←←←←←←←
NEXT WEEK'S CSLI SEMINAR
Foundations of Document Preparation
Document preparation, by which I mean the use of the computer to
prepare graphical presentations of verbal and pictorial information on
screens and on paper, is inherently a linguistic activity. This
statement is true in two senses: Documents, first of all, are
linguistic artifacts. But in addition, the use of the computer as a
marking tool is inherently linguistic: we *describe* to the computer
the documents we wish to create.
Current document preparation tools (the likes of TeX, Tedit, Emacs,
Scribe, etc.) are highly inadequate and unnecessarily restrictive.
This is because, I would claim, their designers have failed to take
explicit account of the linguistic nature of document preparation:
these tools have been built in advance of a theory of their subject
matter. In this talk, I will present an overview of research aimed at
developing a ``theory of marking'' to serve as the foundation for the
design of such tools. I will set forth the broad outlines of the
theory---one that lies at the intersection of a theory of production,
a theory of representation, and a theory of marks---and will
demonstrate that the issues of representation, reference, and action
with which the Center is concerned are central to this research. The
bulk of the talk will be devoted to illustrating the search for
founding concepts in the theory of marks---concepts such as figure,
ground, region, and blueprint. Such concepts are just as essential to
a future linguistics of written forms as to a foundation for document
preparation. --David Levy
!
Page 3 CSLI Newsletter October 24, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
ENVIRONMENTS GROUP MEETING
The Rational Programming Environment - Summary
Wolfgang Polak, Kestrel
October 28, 1985, Ventura Trailer Classroom
In 1981 Rational commenced work on an Ada oriented software
development system. The goal was to create a commercial system
providing Lisp-style interactiveness and environment features for Ada.
The project encompassed a language oriented machine architecture,
specialized hardware, an integrated language based operating system
and programming environment, and project management support tools.
The original design used Ada's packages to create a hierarchy of
nested structures corresponding to conventional directory systems.
Permanent storage was provided by implementing persistent data objects
in the language. Programs and data are simply declarations within the
hierarchy of packages. Programs are only stored in internal
representation; semantic consistency (according to language semantics)
is maintained across the whole system. This organization allows
powerful program manipulation and query tools to be implemented
easily.
While very uniform, the use of packages as directories with the
associated semantic complexities proved cumbersome. In later versions
the directory structure was simplified and no longer subject to the
exact language rules.
The system is built around a powerful action mechanism. Any number
of directory/object manipulations can be associated with an action.
The action can later be committed, in which case all operations take
effect, or the action can be abandoned, in which case all operations
are undone.
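A minimal sketch in Python of the action mechanism just described (the
class and names are mine): operations are staged under an action and
either all take effect on commit or are all discarded on abandon.

  class Action:
      def __init__(self, store):
          self.store, self.staged = store, {}

      def put(self, key, value):
          self.staged[key] = value        # pending, not yet in the store

      def commit(self):
          self.store.update(self.staged)  # all operations take effect
          self.staged = {}

      def abandon(self):
          self.staged = {}   # as if none of the operations ever happened

  store = {"main.ada": "v1"}
  act = Action(store)
  act.put("main.ada", "v2")
  act.put("util.ada", "v1")
  act.abandon()
  print(store)   # {'main.ada': 'v1'}, unchanged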
The user interacts with the system via a multi-window editor. Each
window is of a particular type (e.g. text, program, status, etc.). The
system includes a general structure oriented editor which combines
structure operations with arbitrary text manipulation. Editor commands
are uniform across all windows; only the effect of structure
operations depends on the type of window.
Fast incremental compilation facilitates both interactive program
development and command execution.
----------
PIXELS AND PREDICATES
``Visual Programming Languages --
From Visual Assembler to Rocky's Boots''
Warren Robinett, with an assist by Scott Kim
CSLI trailers, Wednesday, October 30, 1:00 p.m.
A general view of the visual programming language problem is
presented, anchored by two concrete examples.
The first example is a visual assembly language, where patterns of
pixels are interpreted as low-level instructions which manipulate
patterns of pixels (and wherein one of the PnP themes is exemplified:
a very primitive `predicate made from pixels').
The second example is Rocky's Boots, a high-level visual
programming language based on the building circuits metaphor
(construed in some circles as an educational game).
!
Page 4 CSLI Newsletter October 24, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
SUMMARY OF ENVIRONMENTS GROUP MEETING
October 21, 1985
Terry Winograd described research on an environment for use by
people who are developing and modifying languages, and need to be able
to produce and manipulate texts in those languages during this
evolutionary phase. It is based on a uniform way of treating grammars
(based on a hierarchical phylum/operator structure with attributes),
so that structure editing, structured storage and other facilities
that are based on the language structure can be easily created and
developed.
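A minimal sketch in Python of that uniform treatment as the summary
describes it (the details are my reading, and the names are invented):
every node carries an operator, a phylum, children, and attributes, and
structure editing respects the phyla the grammar assigns to argument
positions.

  from dataclasses import dataclass, field

  @dataclass
  class Node:
      operator: str          # e.g. "plus"
      phylum: str            # the category the node belongs to
      children: list = field(default_factory=list)
      attributes: dict = field(default_factory=dict)

  # the grammar maps each operator to the phyla of its argument positions
  GRAMMAR = {"plus": ["expression", "expression"]}

  def replace_child(node, i, new_child):
      if GRAMMAR[node.operator][i] != new_child.phylum:
          raise ValueError("phylum mismatch")   # illegal edit is rejected
      node.children[i] = new_child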
He raised a number of issues that come up in trying to make the
environment general (for at least a broad class of existing and
envisioned languages), display-oriented (allowing dynamic changes of
structure and view), incremental (dealing well with continual small
updates), and distributed (multiple users cooperating in a
heterogeneous not-totally-reliable networked environment).
The current system is fragmentary and has not been integrated or
written up. Future talks by others in the group working on it will
address some of the more specific technical issues.
----------
CSLI SEMINAR SUMMARY
Ontology and Intensionality
Summary of CSLI Seminar on October 10
In this seminar, I outlined two recent developments in the theory
of abstract objects---one concerning ontology (the theory of times)
and one concerning intensionality (a solution to the Morning
Star/Evening Star puzzle). Moments of time were identified as
abstract objects, and truth at a time was defined in terms of the
encoding relation. Such definitions yielded the following non-trivial
consequences: times are maximal and consistent with respect to the
propositions true at them; there is a unique present time; a
proposition is always true iff it is true at all times; every
tense-theoretic consequence of a proposition true at a time is also
true at that time. In the second half of the seminar, we demonstrated
that once one uses structured entities as the denotations of
sentences, modal and tense contexts are not, in and of themselves,
intensional. Intensionality arises when definite descriptions appear
in such contexts, and by assigning definite descriptions a second
``intensional'' reading, on which they denote the abstract object
which encodes the properties they imply, we get a solution to the
substitutivity puzzles which preserves our intuitions about the
logical form of the sentences involved. --Edward N. Zalta
!
Page 5 CSLI Newsletter October 24, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
SITUATED ENGINE COMPANY
The STASS project has initiated a working group on the relation
between situation theory and computation. The aim is twofold: to learn
what needs to be added to situation theory to enable it to give
adequate accounts of various computational agents, and to learn how we
might be able to use computers in doing situation theory. These two
aims cause us to distinguish between sigma-machines and tau-machines.
Sigma-machines are machines that are the subject matter for a
situation-theoretic analysis. Tau-machines are machines built to help
do situation theory.
In the long run, we expect that sigma and tau machines will be
merged, so that our theory machines will also be our subject matter
machines.
For now, though, we are operating on two fronts simultaneously. A
simple robot, Gullible, has been designed and implemented by Brian
Smith, Mike Dixon and Tayloe Stansbury. It moves around on a grid,
meeting people, picking up information (and misinformation) and
answering certain questions about other people's locations based on
what it has experienced on its travels. This is to serve as our first
sigma-machine. Four groups have been formed to come up with
semantic analyses of this robot using situation theory.
On the other front, Jon Barwise has been lecturing about situation
theory and its logic, to give a feeling for the basic theory, raising
questions about what it might be reasonable to ask a computer to do,
and coming up with some vague ideas about how one might get it to do
it.
The group meets every Tuesday at Xerox PARC, at 2 p.m., for about
two hours. --Jon Barwise
---------
POSTDOCTORAL FELLOWSHIPS
The Center for the Study of Language and Information (CSLI) at
Stanford University is currently accepting applications for a small
number of one year postdoctoral fellowships commencing September 1,
1986. The awards are intended for people who have received their
Ph.D. degrees since June 1983.
Postdoctoral fellows will participate in an integrated program of
basic research on situated language---language as used by agents
situated in the world to exchange, store, and process information,
including both natural and computer languages.
For more information about CSLI's research programs and details of
postdoctoral fellowship appointments, write to:
Dr. Elizabeth Macken, Assistant Director
Center for the Study of Language and Information
Ventura Hall
Stanford University
Stanford, California 94305
APPLICATION DEADLINE: FEBRUARY 15, 1986
-------
∂24-Oct-85 0819 EMMA@SU-CSLI.ARPA Today's CSLI seminar
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 24 Oct 85 08:19:37 PDT
Date: Thu 24 Oct 85 08:20:04-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Today's CSLI seminar
To: friends@SU-CSLI.ARPA
Tel: 497-3479
The titles of the two talks to be given by Phil Cohen and Ami Kronfeld
in today's 2:15 seminar are
Speech Acts and Rationality
Phil Cohen
The Referential/Attributive Distinction
Ami Kronfeld
As usual, the abstracts can be found in last week's newsletter.
-------
∂30-Oct-85 0931 PHILOSOPHY@SU-CSLI.ARPA Josh Cohen
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 30 Oct 85 09:31:49 PST
Date: Wed 30 Oct 85 09:27:51-PST
From: Eve Wasmer <PHILOSOPHY@SU-CSLI.ARPA>
Subject: Josh Cohen
To: folks@SU-CSLI.ARPA
cc: friends@SU-CSLI.ARPA
The Philosophy Department is sponsoring a talk by Joshua Cohen from
M.I.T. The talk is titled "Structure, Choice, and Legitimacy in Locke's
Theory of Politics", and will be at 4:15 on Tuesday, November 5 in the
Philosophy Department Seminar Room, 90-92Q.
-------
∂30-Oct-85 1732 EMMA@SU-CSLI.ARPA Newsletter October 31, No. 52
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 30 Oct 85 17:32:20 PST
Date: Wed 30 Oct 85 16:47:32-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Newsletter October 31, No. 52
To: friends@SU-CSLI.ARPA
Tel: 497-3479
!
C S L I N E W S L E T T E R
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
October 31, 1985 Stanford Vol. 2, No. 52
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR *THIS* THURSDAY, October 31, 1985
12 noon TINLunch
Ventura Hall The Formation of Adjectival Passives
Conference Room by B. Levin and M. Rappaport
Discussion led by Mark Gawron
2:15 p.m. CSLI Seminar
Redwood Hall Foundations of Document Preparation
Room G-19 David Levy, CSLI and Xerox PARC
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Redwood Hall The Structure of Social Facts
Room G-19 Prof. John Searle, Dept. of Philosophy, UC Berkeley
←←←←←←←←←←←←
CSLI ACTIVITIES FOR *NEXT* THURSDAY, November 7, 1985
12 noon TINLunch
Ventura Hall James Gibson's Ecological Revolution in Psychology
Conference Room by E. S. Reed and R. K. Jones
Discussion led by Ivan Blair, CSLI
(Abstract on page 2)
2:15 p.m. CSLI Seminar
Redwood Hall Phonology/Phonetics Seminar
Room G-19 Bill Poser and Paul Kiparsky
(Abstract on page 2)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Redwood Hall Meaning, Information and Possibility
Room G-19 Lotfi A. Zadeh, Computer Science Division
University of California at Berkeley
(Abstract on page 2)
←←←←←←←←←←←←
!
Page 2 CSLI Newsletter October 31, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
ABSTRACT OF NEXT WEEK'S TINLUNCH
James Gibson's Ecological Revolution in Psychology
E. S. Reed and R. K. Jones
From about 1950 until his death, James Gibson constantly argued for
a view of and a research program for cognitive psychology that
differed radically from the mainstream position. Today the dominant
view in cognitive psychology is of cognitive agents as information
processors, a view to which the advent of the modern digital computer
has given a considerable boost. In the paper for this week's
Tinlunch, Reed and Jones characterize and contrast the Gibsonian (or
ecological) and information processing approaches.
My intention is to use this article to lay out for discussion the
basic principles of the ecological approach. The issues to be
considered include: the need for cognitive psychology to study the
organism in a real environment; the ecological program of studying
the environmental sources of information; and the rejection of any
appeal to mental representations in psychological explanation.
--Ivan Blair
←←←←←←←←←←←←
NEXT WEEK'S CSLI SEMINAR
Abstract of Phonology/Phonetics seminar
Post-lexical phonological rules are associated with a hierarchy of
nested domains, which are systematically related to phrase structure.
There is growing evidence in favor of recent proposals that this
hierarchy is universal. In this talk, we show that Japanese has tonal
rules associated with each of the postulated post-lexical domains, and
propose a cross-linguistic account for one of the prosodic domains,
the phonological phrase. --Bill Poser, Paul Kiparsky
←←←←←←←←←←←←
NEXT WEEK'S CSLI COLLOQUIUM
Meaning, Information and Possibility
L.A. Zadeh, Computer Science Division, University of
California, Berkeley, CA 94720
Our approach to the connection between meaning and information is in
the spirit of the Carnap-Bar-Hillel theory of state descriptions.
However, our point of departure is the assumption that any proposition,
p, may be expressed as a generalized assignment statement of the form
X `isr' C, where X is a variable which is usually implicit in p, C is
an elastic constraint on the values which X can take in a universe of
discourse U, and the suffix r in the copula `isr' is a variable whose
values define the role of C in relation to X. The principal roles are
those in which r is d, in which case C is a disjunctive constraint; and
r is c, p and g, in which cases C is conjunctive, probabilistic, and
granular, respectively. In the case of a disjunctive constraint, `isd'
is written for short as `is', and C plays the role of a graded possibility
distribution which associates with each point (or, equivalently,
state-description) the degree to which it can be assigned as a value to X.
This possibility distribution, then, is interpreted as the information
conveyed by p. Based on this interpretation, we can construct a set of
rules of inference which allow the possibility distribution of a
conclusion to be deduced from the possibility distributions of the
premises. In general, the process of inference reduces to the solution
of a nonlinear program; the connection with the traditional methods of
deduction in first-order logic is explained and illustrated by examples.
!
Page 3 CSLI Newsletter October 31, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
ENVIRONMENTS GROUP MEETING
NoteCards: An Environment for Authoring and Idea Structuring
Randy Trigg, Xerox PARC
Monday, November 4, noon, Ventura Seminar Room
NoteCards is part of an ongoing research project in the Intelligent
Systems Lab at Xerox PARC investigating "idea processing" tasks, such
as interpreting textual information, structuring ideas, formulating
arguments, and authoring complex documents. NoteCards is intended
primarily as an idea structuring tool, but it can also be used as a
fairly general database system for loosely structured information.
The basic object in NoteCards is an electronic note card containing
an idea-sized unit of text, graphics, images, or whatever. Different
kinds of note cards are defined in an inheritance hierarchy of note
card types (e.g., text cards, sketch cards, query cards, etc.). On
the screen, multiple cards can be simultaneously displayed, each one
in a separate window having an underlying editor appropriate to the
card type.
Individual note cards can be connected to other note cards by
arbitrarily typed links, forming networks of related cards. At
present, link types are simply labels attached to each link. It is up
to each user to utilize the link types to organize the note card
network.
NoteCards also includes a filing mechanism for building
hierarchical structures using system-defined card and link types.
There are also browser cards containing node-link diagrams (i.e.,
maps) of arbitrary pieces of the note card network, and Sketch cards
for spatially organizing information in the form of drawings, text,
and links.
All of the functionality in NoteCards is accessible through a set
of well-documented Lisp functions, allowing the user to create new
types of note cards, develop programs that monitor or process the note
card network, and/or integrate other programs into the NoteCards
environment.
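A minimal sketch in Python of the basic objects described above (an
invented surface, not the documented Lisp interface): cards hold
content and are connected by arbitrarily typed links, where link types
are simply labels.

  class Card:
      def __init__(self, card_type, content):
          self.card_type = card_type   # e.g. "text", "sketch", "query"
          self.content = content
          self.links = []              # outgoing (label, target) pairs

      def link(self, label, target):
          self.links.append((label, target))

  def neighbors(card, label):
      """Follow only links of one type; how a user's own labeling
      scheme organizes the network."""
      return [t for l, t in card.links if l == label]

  claim = Card("text", "Thematic roles are dispensable.")
  note = Card("text", "See the Levin and Rappaport working paper.")
  claim.link("supported-by", note)
  print(len(neighbors(claim, "supported-by")))   # 1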
----------
PIXELS AND PREDICATES
The Caricature Generator
Susan Brennan
CSLI trailers, 1:00 p.m., Wednesday, November 6, 1985
In an investigation of primitives for image generation,
manipulation and perception, a face is an interesting example of an
image. I will briefly survey psychological literature on face
perception which treats such issues as piecemeal vs. configurational
recognition strategies. I'll describe an application where a
caricature of a face serves as a form of semantic bandwidth
compression. Then, with additional inspiration from art, computer
graphics and machine vision, I'll develop a theory of caricature.
Conditions permitting, there will be a demonstration of a program
which generates caricatures of faces from line drawings and provides
the user with a single exaggeration control with which the distortion
in the image (relative to a norm) can be turned up or down. I will
also show a videotape and refer to the work that Gill Rhodes and I
have been doing recently on perception of these caricatures.
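A minimal sketch in Python of the exaggeration control the abstract
describes (the landmark data are toy values of mine): a face is a list
of landmark points, and the caricature scales each point's deviation
from the corresponding point of a norm face; k = 1 reproduces the face,
k > 1 turns the distortion up, k < 1 turns it down.

  def caricature(face, norm, k):
      return [(nx + k * (fx - nx), ny + k * (fy - ny))
              for (fx, fy), (nx, ny) in zip(face, norm)]

  norm = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.2)]   # hypothetical landmarks
  face = [(0.1, 0.0), (1.1, 0.1), (0.5, 1.5)]
  print(caricature(face, norm, 2.0))   # every deviation doubled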
!
Page 4 CSLI Newsletter October 31, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
INTERACTIONS OF MORPHOLOGY AND SYNTAX
Case-Assignment by Nominals in Japanese
Masayo Iida
Thursday, October 31, 10:00 a.m., Ventura Conference Room
In this paper I will discuss certain peculiar properties of a class
of Japanese deverbal nominals, which show verb-like properties in
certain environments: specifically, they assign verbal case and can be
modified by adverbs (`verbal case' includes nominative, accusative and
dative, i.e., cases normally assigned by a verb). These
case-assignment phenomena pose a problem for current syntactic
theories, which assume that verbs alone assign such cases, while nouns
do not. Now I have observed that a deverbal nominal assigns verbal
case only when it is concatenated with a suffix bearing temporal
information, which might be encoded with the feature [+aspect]. The
nominal assigns case when the following two conditions are satisfied:
(i) the nominal has a predicate-argument structure, and (ii) it is
concatenated with a suffix which bears an aspectual feature. I will
propose that (syntactic) category membership is not sufficient for
determining properties of case-assignment, adverb distribution, etc.,
and suggest that the factors (i) and (ii) are perhaps more relevant.
--Masayo Iida
----------
LOGIC SEMINAR
``Truth, the Liar, and Circular Propositions''
John Etchemendy and Jon Barwise, Philosophy Dept. Stanford
Friday, Nov. 1, noon, 383N (Math. Dept. Faculty Lounge)
Unlike standard treatments of the Liar, we take seriously the
intuition that truth is, first and foremost, a property of
propositions (not of sentences), and the intuition that propositions
(unlike sentences) can be genuinely circular or nonwellfounded. To
model the various semantic mechanisms that give rise to the paradox,
we work within Peter Aczel's set theory, ZFC/AFA, a theory
equiconsistent with ZFC but with Foundation replaced by a strong
anti-foundation axiom. We give two separate models; one based on an
Austinian conception of propositions (according to which a proposition
claims that an actual or ``historical'' situation is of a specified
type), and one based on a Russellian conception (according to which
propositions are complexes of objects and relations). The models show
that the moral of the Liar depends in a crucial way on which
conception is adopted.
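For readers unfamiliar with ZFC/AFA, a one-line gloss (mine, not the
authors' formulation): the anti-foundation axiom guarantees that
systems of set equations have unique solutions, for instance a unique
set

  \Omega = \{\Omega\},

and it is this that allows a genuinely circular Liar proposition, a
proposition f with f = [\neg\,\mathrm{True}(f)], to be modeled as a
bona fide set-theoretic object.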
!
Page 5 CSLI Newsletter October 31, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
SUMMARY OF ENVIRONMENTS GROUP MEETING
October 28, 1985
Wolfgang Polak of Kestrel spoke on the Ada programming environment
he helped develop at Rational. By combining dedicated special
hardware (high-level-language oriented) with a monolingual operating
system / command language / environment (written entirely in Ada and
supported with specialized microcode and memory management), it was
possible to design the environment in a unified way using the language
itself as the structure. All storage is handled by making it possible
for arbitrary data objects in the language to be declared
``persistent,'' rather than having a separate concept of files. These
persistent objects are the locus of object management (access control,
versions, etc.). The environment is editor-based, with the commands
extended by using arbitrary function calls in the language. It
incorporates a concept of unitary action, which allows the user to
make a sequence of changes and then either commit (in which case they
all take effect at once) or abandon (in which case the state is as if
none of them ever happened). Wolf described a number of techniques
for making the environment incremental---for keeping the feel that
each small change takes effect as it is made, rather than waiting for
some large-scale redisplay or compile. Discussion emphasized the way
that a number of these issues and techniques could apply to other
environments.
-------
∂01-Nov-85 0631 @SU-CSLI.ARPA:admin%cogsci@BERKELEY.EDU UCB Cognitive Science Seminar--Nov. 5
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 1 Nov 85 06:31:48 PST
Received: from ucb-vax.berkeley.edu by SU-CSLI.ARPA with TCP; Fri 1 Nov 85 06:31:43-PST
Received: by UCB-VAX (5.29/5.14)
id AA01165; Wed, 30 Oct 85 12:21:37 PST
Received: by cogsci (5.31/5.13)
id AA01002; Wed, 30 Oct 85 12:05:18 PST
Date: Wed, 30 Oct 85 12:05:18 PST
From: admin%cogsci@BERKELEY.EDU (Cognitive Science Program)
Message-Id: <8510302005.AA01002@cogsci>
To: cogsci-friends@cogsci.berkeley.edu
Subject: UCB Cognitive Science Seminar--Nov. 5
BERKELEY COGNITIVE SCIENCE PROGRAM
Fall 1985
Cognitive Science Seminar - IDS 237A
Tuesday, November 5, 11:00 - 12:30
240 Bechtel Engineering Center
Discussion: 12:30 - 1:30 in 200 Building T-4
``On the Intentional Contents of Mental States About Fictions''
Edward Zalta
Postdoctoral Fellow in Philosophy at C.S.L.I.
Acting Asst. Professor of Philosophy, Stanford University
In this seminar, I present a theory of intentional objects
some of which seem to serve nicely as the contents of mental
states about stories and dreams (no matter how bizarre they may
be). The theory yields a way of understanding utterances about
particular fictional characters and particular dream objects.
For the purposes of the talk, it will make no difference
whether one construes the theory ontologically as a theory
about what the world has to be like or has to have in it in
order for us to characterize properly such mental states, or
whether one construes the theory as just a canonical notation
for specifying the contents of (or mental representations
involved in) such states. Either way, one is left with a
domain over which operations may be defined to explain how we
get from one state to the next, and so the theory should be of
interest to cognitive scientists. The philosophical basis of
my work lies in a theoretical compromise between the views of
Edmund Husserl and Alexius Meinong, and it is consistent with
classical logic.
---------------------------------------------------------------
UPCOMING TALKS
November 12:Robert Wilensky, Computer Science, UCB
November 19:Richard Alterman, Computer Science, UCB
November 26:Eve Clark, Linguistics, Stanford
December 3:Bernard Baars, Langley Porter, UCSF
---------------------------------------------------------------
ELSEWHERE ON CAMPUS
Steven Pulos, UCB, will discuss ``Children's conceptions of
computers'' at the SESAME Colloquium on Monday, November 4,
4:00pm, 2515 Tolman Hall.
John Kruschke, UCB, will speak on ``Depth and the Configural
Orientation Effect'' at the Cognitive Psychology Colloquium,
Friday, November 8, 4:00pm, Beach Room, 3105 Tolman Hall.
∂04-Nov-85 2048 @SU-CSLI.ARPA:dirk@SU-PSYCH Psychology Department Friday Seminar.
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 4 Nov 85 20:47:53 PST
Received: from SU-PSYCH by SU-CSLI.ARPA with TCP; Mon 4 Nov 85 20:45:31-PST
From: dirk@SU-PSYCH (Dirk Ruiz)
Received: from by SU-PSYCH with TCP; Mon, 4 Nov 85 20:45:08 pst
Date: 04 Nov 85 20:44:59 PST (Mon)
To: friends@su-csli.ARPA
Subject: Psychology Department Friday Seminar.
Our speaker this week is Gyorgy Gergely. Time and place are 3:15, Friday
(November 8, 1985) in room 100 of Jordan Hall. Title and abstract follow:
------------------------------------------------------------------------
Discourse Integrational Processes in Sentence Comprehension
Gyorgy Gergely
Classical models of sentence processing (e.g., Fodor, Bever & Garrett,
1974) developed in the universalist framework of Chomskian generative
grammar are examined critically from a functionalist comparative
perspective. It is argued that earlier interpretations of on-line measures
of clausal processing (e.g., of the local increase of processing load
before the clause-boundary) lose their plausibility when considering a
class of languages that are typologically radically different from English.
Several experiments will be reported that examine clausal processing in
Hungarian, a non-Indo-European language, which, unlike English, has a)
`free' word order, b) marks underlying structural roles of NPs locally
unambiguously by case-marker suffixes, and c) encodes the discourse
functions of surface constituents syntactically.
The experiments demonstrate the existence of several kinds of discourse
integrational processes (such as `topic foregrounding' or focus-based
`inferential priming') which determine on-line measures of clausal
processing. The results suggest that the local increase in processing load
at the end of the clause serves, to a large extent, across-clause discourse
integrational functions rather than within-clause functions of assigning
underlying structural representations, as previously supposed. It is shown
that, during on-line processing, discourse segmentational cues, identifying
the informational focus (i.e., `new' information) and topic (i.e., `given'
information) of the clause, play a crucial role in directly mapping surface
sequences onto discourse representational structures.
------------------------------------------------------------------------
------- End of Forwarded Message
∂07-Nov-85 0646 @SU-CSLI.ARPA:emma@csli-whitehead CSLI Newsletter
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 7 Nov 85 06:46:07 PST
Received: from csli-whitehead ([36.9.0.8].#Internet) by SU-CSLI.ARPA with TCP; Thu 7 Nov 85 06:43:51-PST
Received: by csli-whitehead with TCP; Wed, 6 Nov 85 16:14:10 pst
Date: Wed 6 Nov 85 16:14:07-PST
From: Emma Pease <EMMA@CSLI-WHITEHEAD.ARPA>
Subject: CSLI Newsletter
To: csli@SRI-AI.ARPA, sugai@XEROX.ARPA, friends@SU-CSLI.ARPA
Message-Id: <VAX-MM(161)+TOPSLIB(113) 6-Nov-85 16:14:07.CSLI-WHITEHEAD.ARPA>
The CSLI newsletter will not appear until tomorrow morning at the
earliest because CSLI (the computer) has crashed.
Please pass the word on to other people.
Emma Pease
Newsletter Editor
-------
∂07-Nov-85 0946 EMMA@SU-CSLI.ARPA re: newsletter
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 7 Nov 85 09:46:16 PST
Date: Thu 7 Nov 85 09:29:00-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: re: newsletter
To: friends@SU-CSLI.ARPA
Tel: 497-3479
Because of a major crash by the CSLI computer, I was unable to get the
newsletter out by Wednesday evening. I hope to print it sometime
today but probably not before lunch hence the schedule of today's
events below.
--Emma Pease
CSLI ACTIVITIES FOR *THIS* THURSDAY, November 7, 1985
12 noon TINLunch
Ventura Hall James Gibson's Ecological Revolution in Psychology
Conference Room by E. S. Reed and R. K. Jones
Discussion led by Ivan Blair, CSLI
2:15 p.m. CSLI Seminar
Redwood Hall Phonology/Phonetics Seminar
Room G-19 Bill Poser and Paul Kiparsky
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Redwood Hall Meaning, Information and Possibility
Room G-19 Lotfi A. Zadeh, Computer Science Division
University of California at Berkeley
-------
∂07-Nov-85 1726 EMMA@SU-CSLI.ARPA Newsletter November 7, No. 1
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 7 Nov 85 17:26:34 PST
Date: Thu 7 Nov 85 16:41:45-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Newsletter November 7, No. 1
To: friends@SU-CSLI.ARPA
Tel: 497-3479
!
C S L I N E W S L E T T E R
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
November 7, 1985 Stanford Vol. 3, No. 1
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR *THIS* THURSDAY, November 7, 1985
12 noon TINLunch
Ventura Hall James Gibson's Ecological Revolution in Psychology
Conference Room by E. S. Reed and R. K. Jones
Discussion led by Ivan Blair, CSLI
2:15 p.m. CSLI Seminar
Redwood Hall Phonology/Phonetics Seminar
Room G-19 Bill Poser and Paul Kiparsky
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Redwood Hall Meaning, Information and Possibility
Room G-19 Lotfi A. Zadeh, Computer Science Division
University of California at Berkeley
←←←←←←←←←←←←
CSLI ACTIVITIES FOR *NEXT* THURSDAY, November 14, 1985
12 noon TINLunch
Ventura Hall Machines and the Mental
Conference Room by Fred Dretske
Discussion led by Jon Barwise
(Abstract next week)
2:15 p.m. CSLI Seminar
Redwood Hall To be announced
Room G-19
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Redwood Hall Partial Truth Conditions and Their Logics
Room G-19 Hans Kamp, University of Texas
(Abstract on page 2)
←←←←←←←←←←←←
!
Page 2 CSLI Newsletter November 7, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
NEXT WEEK'S COLLOQUIUM
Partial Truth Definitions and their Logics
Hans Kamp
Until recently truth definitions for formal and natural languages
were, with some few exceptions, total (in the sense of specifying
w.r.t. any model a truth value for each sentence of the language under
consideration). But during the past decade partial truth definitions
have become increasingly common both within symbolic logic and in
formal semantics.
The motives for adopting partial truth definitions vary considerably.
I will focus on three issues that have led to the formulation of such
definitions: i) vagueness; ii) the semantic paradoxes; and iii)
verification by partial information structures (a concept that has
inspired both situation semantics and recent work on the semantics of
data structures). I will discuss and compare some of the partial
semantics that have been developed in attempts to come to terms with
these issues, looking in particular at the question what logics are
generated by the resulting semantic theories. I will argue that the
relation between semantics and logic is less straightforward when the
truth definition is partial than when it is total, and consequently that
the notion of logical validity becomes much more delicate and equivocal
once total semantics is abandoned in favor of some partial alternative.
----------
PIXELS AND PREDICATES
Automatic Generation of Graphical Presentations
Jock Mackinlay
CSLI trailers, 1:00 p.m., Wednesday, November 13, 1985
The goal of my thesis research is to develop an application-
independent presentation tool that automatically generates appropriate
graphical presentations of information such as charts, maps, and
network diagrams. A presentation tool can be used to build effective
user interfaces because it exploits the structure of the information
and the capabilities of the output device to generate appropriate
presentations. Application designers need not be graphical
presentation experts to ensure that their user interfaces use
graphical languages correctly and effectively.
The research has two parts: a formal analysis of graphical
languages for presentation and a prototype presentation tool based on
the formal analysis.
The formal analysis uses syntactic and semantic descriptions of
graphical languages to develop criteria for evaluating graphical
presentations. There are two major classes of criteria: expressiveness
and effectiveness. The expressiveness criteria are theorems that identify
when a set of facts is or is not expressible in a language. The
effectiveness criteria are conjectures (rather than theorems) about
the relative difficulty of the perceptual tasks associated with the
interpretation of graphical languages. Sufficiently expressive languages
are ordered by the difficulty of their associated perceptual tasks.
The prototype presentation tool, called APT (A Presentation Tool),
uses the criteria developed by formal analysis to search a space of
graphical languages for an appropriate presentation. A novel feature
of APT is its ability to generate its search space by composing
sophisticated designs from a small set of fundamental graphical
languages. The design portion of APT is a logic program based on the
MRS representation system.
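A minimal sketch in Python of the two-stage selection the abstract
describes (the languages, data types, and difficulty ranks are invented
for illustration; APT itself is a logic program): filter out languages
that cannot express the data, then order the survivors by the
difficulty of their perceptual tasks.

  LANGUAGES = {
      # language: (data types it can express, perceptual difficulty)
      "position-on-axis": ({"quantitative", "ordinal", "nominal"}, 1),
      "length":           ({"quantitative"}, 2),
      "color-hue":        ({"nominal"}, 3),
  }

  def choose(data_type):
      expressive = [(diff, name)
                    for name, (types, diff) in LANGUAGES.items()
                    if data_type in types]     # expressiveness criteria
      return min(expressive)[1] if expressive else None

  print(choose("nominal"))  # 'position-on-axis': easiest adequate language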
!
Page 3 CSLI Newsletter November 7, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
ENVIRONMENTS GROUP MEETING
A Very-High-Level-Language Programming Environment
Steve Westfold, Kestrel
Monday, November 11, noon, Ventura Seminar Room
Kestrel Institute is doing research on a programming system based
on a very-high-level specification/programming language. The language
is based on logic and set theory. It is a wide-spectrum language
encompassing both an inference model of computation and a state-change
model. Compilation is done by transformation and step-wise refinement
into the target language (initially Lisp). A central part of the
system is the ability to define new language constructs and domain
languages, and facilities for manipulating and transforming them. Most
of the system is written in the system language.
The underlying structure of the environment is a database of
objects, sets, sequences and mappings. There is an object hierarchy
which is used primarily for factoring applicability of mappings.
Language statements (parse structures and annotations) are represented
in the database. We identify the representation of statements with
the meta-level description of those statements. Thus, meta-level
inference on descriptions results in statement manipulation such as
transformation. Usually the programmer need not be aware of the
representation because of a quotation construct that is analogous to
Lisp backquote, but is more powerful and can be used for testing and
decomposing statements as well as constructing them. Among the ways
that the user may view portions of the database are as prettyprinted
language statements, as objects with properties, and as graphs of
boxes and arrows. The database may be edited using any of these
views.
The system enforces constraints stated as implications (universally
quantified) with an indication of the triggers for enforcement and of
the entities to change to make the constraint true.
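A minimal sketch in Python of trigger-driven constraint enforcement (my
own shape for it, not Kestrel's): a constraint pairs a triggering key
with a repair that makes the implication true again whenever that key
changes.

  class Database:
      def __init__(self):
          self.data, self.constraints = {}, []

      def constrain(self, trigger, repair):
          self.constraints.append((trigger, repair))

      def set(self, key, value):
          self.data[key] = value
          for trigger, repair in self.constraints:
              if trigger == key:
                  repair(self.data)    # enforce the constraint

  db = Database()
  # constraint: version-count equals len(versions), triggered by versions
  db.constrain("versions",
               lambda d: d.update({"version-count": len(d["versions"])}))
  db.set("versions", ["v1", "v2"])
  print(db.data["version-count"])   # 2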
We have a context tree mechanism for keeping different states of
the database. It is somewhat smart in that it does not save undo
information for database changes that are ``internal'' to the current
state. It would have wider application if it were able to work on
subsets of the database rather than the database as a whole.
We have recently built a prototype for a project management system.
It deals with system components and their versions and bugs, and tasks
and schedules. This work is at a fairly early stage and not my area
so I wouldn't want to talk much about the details of it, although
someone else at Kestrel might. However, it does provide good examples
of the utility of the language-defining and constraint capabilities in
a domain other than program synthesis.
(Note: Now that I have been getting more comprehensive abstracts, I
won't bother to write extended summaries. --Terry Winograd)
!
Page 4 CSLI Newsletter November 7, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
SUMMARY OF LAST WEEK'S SEMINAR
Foundations of Document Preparation
David Levy, CSLI and Xerox PARC
David Levy presented an overview of research aimed at providing a
theoretical foundation for document preparation by developing a
``theory of marking.'' In early marking technology, he suggested, no
such theory was needed because people were able to rely on their
``tacit'' knowledge of how marks are placed on surfaces. But this is
no longer sufficient, because the marks made and the documents
produced in the new computer technology are determined by the
``descriptions'' we offer the computer; hence the classes of documents
that can be produced are bounded by the ``languages'' available. The
theory of marking is intended to make explicit the necessary
distinctions out of which rational, and, if possible, complete
languages can be designed.
In the first section of the talk, Levy offered several examples
from the Interlisp-D environment suggesting that Tedit (the WYSIWYG
text editor), Sketch (the interactive drawing editor), and the window
system were all essentially after the same thing, namely the placement
of figures on two-dimensional surfaces, but because this fact had not
been clearly perceived and because there was no theoretical machinery
to support such an analysis, each of the systems was in its own way
limited. Important generalizations had been missed.
The bulk of the talk was devoted to outlining the theory of
marking, which he conjectured would lie at the intersection of a
theory of marks, a theory of production, and a theory of
representation. The theory of marks would provide the concepts needed
to describe the relationship between static figures placed on two
dimensional surfaces, while the theory of production, by specifying
the relationship between the activity of producing an artifact and the
artifact so produced, would introduce the notion of activity necessary
to transform a theory of ``marks'' into a theory of ``marking.'' A
theory of production, he noted, would be needed for a theory of
language use as well as a theory of marking. Little was said about a
theory of representation except to suggest that it was a topic of real
concern to many others at the Center.
----------
SUMMARY OF TALK
The Semantics of Types and Terms and Their Equivalences
for Various Lambda-Calculi
Prof. Giuseppe Longo, University of Pisa
November 6, 1985, Ventura Hall
Lambda calculus provides the core of functional programming as it
is based on the key notions of functional abstraction and application.
The first part of the lecture presented an introductory account of the
main type disciplines and their semantics. First-order polymorphism
and its motivations were also surveyed. In the second part, the
semantic equivalence of typed terms was discussed. The relation
between types and terms gives us an insight into second-order
polymorphism (parametric types) and its semantics.
(Professor Longo was visiting CSLI from November 4 to November 7.)
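A minimal sketch in Python of the two key notions the lecture starts
from, functional abstraction and application, rendered with closures;
the Church numerals are a standard illustration, not an example from
the lecture.

  zero = lambda f: lambda x: x
  succ = lambda n: (lambda f: lambda x: f(n(f)(x)))
  add  = lambda m: lambda n: (lambda f: lambda x: m(f)(n(f)(x)))

  def to_int(n):
      """Read a Church numeral back as a Python integer."""
      return n(lambda k: k + 1)(0)

  two = succ(succ(zero))
  print(to_int(add(two)(two)))   # 4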
!
Page 5 CSLI Newsletter November 7, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
NEW CSLI REPORTS
Report No. CSLI-85-37, ``On the Coherence and Structure of
Discourse'' by Jerry R. Hobbs, and Report No. CSLI-85-38, ``The
Coherence of Incoherent Discourse'' by Jerry R. Hobbs and Michael
Agar, have just been published. These reports may be obtained by
writing to David Brown, CSLI, Ventura Hall, Stanford, CA 94305 or
Brown@SU-CSLI.
-------
∂08-Nov-85 1647 @SU-CSLI.ARPA:admin%cogsci@BERKELEY.EDU UCB Cognitive Science Seminar--Nov. 12 (R. Wilensky, UCB)
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 8 Nov 85 16:47:06 PST
Received: from ucbvax.berkeley.edu by SU-CSLI.ARPA with TCP; Thu 7 Nov 85 19:32:59-PST
Received: by ucbvax.berkeley.edu (5.31/1.2)
id AA16621; Thu, 7 Nov 85 17:27:35 PST
Received: by cogsci (5.31/5.13)
id AA04733; Thu, 7 Nov 85 17:29:20 PST
Date: Thu, 7 Nov 85 17:29:20 PST
From: admin%cogsci@BERKELEY.EDU (Cognitive Science Program)
Message-Id: <8511080129.AA04733@cogsci>
To: cogsci-friends@cogsci.berkeley.edu
Subject: UCB Cognitive Science Seminar--Nov. 12 (R. Wilensky, UCB)
BERKELEY COGNITIVE SCIENCE PROGRAM
Cognitive Science Seminar - IDS 237A
Tuesday, November 12, 11:00 - 12:30
240 Bechtel Engineering Center
Discussion: 12:30 - 1:30 in 200 Building T-4
``Knowledge Representation and a Theory of Meaning''
Robert Wilensky
Computer Science Division, U.C.B.
Knowledge representation is central to most Artificial Intelli-
gence endeavors. However, most knowledge representation
schemes are incomplete in a number of ways. In particular,
their coverage is inadequate, and they do not capture signifi-
cant aspects of meanings. Many do not even adhere to basic
criteria of well-formedness for a meaning representation.
KODIAK is a theory of knowledge representation developed at
Berkeley that attempts to address some of these deficiencies.
KODIAK incorporates representational ideas that have emerged
from different schools of thought, in particular from work in
semantic networks, frames, Conceptual Dependency, and frame
semantics. In particular, KODIAK eliminates the frame/slot
distinction found in frame-based languages (alternatively,
case/slot distinction found in semantic network-based systems).
In its place KODIAK introduces a new notion called the
absolute/aspectual distinction. In addition, the theory sup-
ports ``non-literal'' representations, namely, those motivated
by metaphoric and metonymic considerations. Using these dev-
ices, the theory allows for the representation of some ideas
that in the past have only been represented procedurally,
informally, or not at all.
KODIAK is being used to represent both linguistic and concep-
tual structures. When applied to the representation of
linguistic knowledge, a new framework for talking about meaning
emerges. Five aspects of meaning have been identified. These
appear to be useful in describing processing theories of
natural language use.
----------------------------------------------------------------
UPCOMING TALKS
November 19: Richard Alterman, Computer Science, UCB
November 26: Eve Clark, Linguistics, Stanford
December 3: Bernard Baars, Langley Porter, UCSF
----------------------------------------------------------------
ELSEWHERE ON CAMPUS
Peter Pirolli will speak on ``Intelligent Tutoring Systems'' at
the SESAME Colloquium on November 18, 2515 Tolman Hall, 4:00pm.
∂12-Nov-85 0835 EMMA@SU-CSLI.ARPA TINLunch
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 12 Nov 85 08:34:59 PST
Date: Tue 12 Nov 85 08:33:57-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: TINLunch
To: friends@SU-CSLI.ARPA
Tel: 497-3479
For complicated reasons, Jon Barwise's TINLunch has been moved from
December to this Thursday. His abstract follows:
``Machines and the Mental''
Fred Dretske
The paper argues that current computers do not exhibit anything that
deserves to be called rational cognitive activity. Dretske even
claims that they can't add! He then goes on to discuss how one might
build something that deserves to be called a rational machine.
This short, well-written paper is Dretske's Presidential Address to
the APA. Read it and come prepared for a lively session.
-------
∂13-Nov-85 1758 EMMA@SU-CSLI.ARPA Newsletter November 14, No. 2
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 13 Nov 85 17:57:58 PST
Date: Wed 13 Nov 85 17:05:26-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Newsletter November 14, No. 2
To: friends@SU-CSLI.ARPA
Tel: 497-3479
!
C S L I N E W S L E T T E R
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
November 14, 1985 Stanford Vol. 3, No. 2
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR *THIS* THURSDAY, November 14, 1985
12 noon TINLunch
Ventura Hall Machines and the Mental
Conference Room by Fred Dretske
Discussion led by Jon Barwise (Barwise@su-csli.arpa)
(Abstract on page 2)
2:15 p.m. CSLI Seminar
Redwood Hall A Morphological Recognizer with Syntactic and
Room G-19 Phonological Rules
John Bear (Bear@sri-ai.arpa)
(Abstract on page 2)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Redwood Hall Partial Truth Conditions and Their Logics
Room G-19 Hans Kamp, University of Texas
←←←←←←←←←←←←
CSLI ACTIVITIES FOR *NEXT* THURSDAY, November 21, 1985
12 noon TINLunch
Ventura Hall Parsing as Deduction?
Conference Room by Fernando Pereira and David Warren
Discussion led by Mark Johnson (Johnson@su-csli.arpa)
(Abstract will appear next week)
2:15 p.m. CSLI Seminar
Redwood Hall Interactive Modularity
Room G-19 Ron Kaplan, Xerox PARC (Kaplan.pa@xerox.arpa)
(Abstract on page 2)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Redwood Hall An Introduction to Information-based Complexity
Room G-19 J. F. Traub, Computer Science Department, Columbia
(Abstract on page 3)
←←←←←←←←←←←←
!
Page 2 CSLI Newsletter November 14, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
THIS WEEK'S TINLUNCH
Machines and the Mental
by Fred Dretske
The paper argues that current computers do not exhibit anything
that deserves to be called rational cognitive activity. Dretske even
claims that they can't add! He then goes on to discuss how one might
build something that deserves to be called a rational machine.
This short, well-written paper is Dretske's Presidential Address to
the APA. Read it and come prepared for a lively session. --Jon Barwise
←←←←←←←←←←←←
THIS WEEK'S CSLI SEMINAR
A Morphological Recognizer
with Syntactic and Phonological Rules
In many natural language processing systems currently in use, the
morphological phenomena are handled by programs which do not interpret
any sort of rules, but rather contain references to particular
morphemes, graphemes, and grammatical categories. Recently
Koskenniemi, Karttunen, Kaplan and Kay have shown how to build
morphological analyzers in which the descriptions of the phonological
(or orthographic) and syntactic phenomena are separable from the code.
A system will be described which is based on the work of the people
mentioned above. There are two main differences between the system to
be described here and other existing systems of its kind. Firstly,
the spelling rules are not translated into finite state transducers,
but are interpreted directly, thereby yielding a system more amenable
to grammar development than one in which considerable time is
necessary to compile the rules into transducers. Secondly, the
syntactic component has more flexibility than other current systems.
Instead of encoding the syntax entirely in the lexicon by stipulating
about each morpheme what category it is and what category may come
next, this system contains a file of patr-type rules with the power to
unify dags containing disjunctions. --John Bear
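   As a toy illustration of direct rule interpretation (the rules,
lexicon, and format below are invented for exposition and are not the
system described above), candidate lexical forms can be proposed by
undoing each applicable spelling rule and checking the result against
the lexicon, with no compilation into transducers:

  SPELLING_RULES = [          # (surface ending, lexical replacement)
      ("ies", "y+es"),        # e.g., tries  ->  try+es
      ("es",  "+es"),
      ("s",   "+s"),
  ]
  LEXICON = {"try": "V", "cat": "N"}
  SUFFIXES = {"+es": "3sg/pl", "+s": "3sg/pl", "": "base"}

  def analyses(surface):
      # Undo each applicable rule to propose candidate lexical strings.
      candidates = [surface]
      for ending, replacement in SPELLING_RULES:
          if surface.endswith(ending):
              candidates.append(surface[:-len(ending)] + replacement)
      # Keep candidates whose stem and suffix the lexicon licenses.
      results = []
      for cand in candidates:
          stem, plus, rest = cand.partition("+")
          if stem in LEXICON and plus + rest in SUFFIXES:
              results.append((stem, LEXICON[stem], SUFFIXES[plus + rest]))
      return results

  print(analyses("tries"))    # [('try', 'V', '3sg/pl')]
  print(analyses("cats"))     # [('cat', 'N', '3sg/pl')]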
----------
NEXT WEEK'S SEMINAR
Interactive Modularity
Ron Kaplan, Xerox PARC
Comprehensible scientific explanations for most complex natural
phenomena are modular in character. Phenomena are explained in terms
of the operation of separate and independent components, with
relatively minor interactions. Modular accounts of complex cognitive
phenomena, such as language processing, have also been proposed, with
distinctions between phonological, syntactic, semantic, and pragmatic
modules, for example, and with distinctions among various rules within
modules. But these modular accounts seem incompatible with the
commonplace observations of substantial interactions across component
boundaries: semantic and pragmatic factors, for instance, can be shown
to operate even before the first couple of phonemes in an utterance
have been identified. In this talk I consider several methods of
reconciling modular descriptions in service of scientific explanation
with the apparent interactivity of on-line behavior. Run-time methods
utilize interpreters that allow on-line interleaving of operations
from different modules, perhaps including additional ``scheduling''
!
Page 3 CSLI Newsletter November 14, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
components for controlling the cross-module flow of information. But
depending on their mathematical properties, modular specifications may
also be transformed by off-line, compile-time operations into new
specifications that directly represent all possible cross-module
interactions. Such compilation techniques allow for run-time
elimination of module boundaries and of intermediate levels of
representation.
----------
NEXT WEEK'S COLLOQUIUM
An Introduction to Information-based Complexity
J. F. Traub
Computer Science Department, Columbia University
In information-based complexity ``information'' is, informally,
what we know about a problem which we wish to solve.
The goal of information-based complexity is to create a general
theory about problems with partial and contaminated information and to
apply the results to solving specific problems in varied disciplines.
Problems with partial and contaminated information occur in areas such
as vision, medical imaging, prediction, geophysical exploration,
signal processing, control, and scientific and engineering
calculation.
For problems with partial and contaminated information, very general
results can be obtained at the ``information level.'' Among the
general results to be discussed are the power of parallel
(non-adaptive) information and the application of such information to
the solution of problems on distributed systems.
The methodology and results of information-based complexity will be
contrasted with the study of NP-complete problems where the
information is assumed to be complete, exact, and free.
----------
PIXELS AND PREDICATES
Setting Tables and Illustrations with Style
Rick Beach, Xerox PARC, (Beach.pa@xerox.arpa)
CSLI trailers, 1:00 p.m., Wednesday, November 20, 1985
Two difficult examples of incorporating complex information within
electronic documents are illustrations and tables. The notion of
style, a way of maintaining consistency, helps manage the complexities
of formatting both tables and illustrations. The concept of graphical
style extends document style to illustrations. Observing that
graphical style does not adequately deal with the layout of
information leads to the study of formatting tabular material. A grid
system for describing the arrangement of information in a table, and a
constraint solver for determining the layout of the table are key
components of this research. These ideas appear to extend to
formatting other complex material, including mathematical typesetting
and page layout. Several typographic issues for illustrations and
tables will be highlighted during the talk.
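   As a toy rendering of the grid-and-constraints idea (a drastic
simplification of our own, not Beach's actual system), a table layout
can be computed by solving the trivial constraints ``each column must
be at least as wide as every cell it contains'':

  def layout(rows, pad=2):
      # Solve the column-width constraints by taking each maximum.
      widths = [max(len(cell) for cell in col) + pad
                for col in zip(*rows)]
      return "\n".join("".join(cell.ljust(w)
                               for cell, w in zip(row, widths))
                       for row in rows)

  print(layout([["style",     "applies to"],
                ["document",  "text"],
                ["graphical", "illustrations"]]))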
!
Page 4 CSLI Newsletter November 14, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
LEXICAL PROJECT MEETING
Lori Levin, University of Pittsburgh
Monday, November 18, 10 a.m., Ventura Conference Room
I will describe a theory of relation changing rules which is
compatible with LFG. The theory accounts for the interaction between
semantically conditioned and syntactically productive relation
changing rules, the ability of relation changing rules to distinguish
between subjects of unaccusative verbs and subjects of unergative
verbs, and the apparent directionality of object-to-subject relation
changes. In order to handle these properties of relation changing
rules, I introduce a new mechanism which I call Argument
Classification which mediates between thematic roles and grammatical
functions in the lexicon. I will illustrate the formulation of
various relation changing rules in English and Dutch using argument
classification.
(This talk is part of the Lexical Project meetings but open to
everybody.)
----------
ENVIRONMENTS GROUP MEETING
An Environment for VLSI Design
Mike Spreitzer, Xerox PARC, (Spreitzer@xerox.arpa)
Monday, November 18, noon, Ventura Seminar Room
We in the PARC CSL VLSI CAD group are working on making our tools
more integrated than they have been. We are defining an in-memory
data structure for representing the basic structure of a VLSI design.
Other information is hung on this "skeleton" via property lists.
Various tools communicate with each other through this decorated
structure. We think this will make it easier for the tools to
cooperate more closely than in the past.
----------
COMMON SENSE AND NON-MONOTONIC REASONING SEMINAR
Some Results on Autoepistemic Logic
Wiktor Marek, University of Kentucky
2:00 PM, Wednesday, November 20, MJH 252
We discuss some properties of so-called stable theories in
autoepistemic logic (cf. Moore, AIJ 25 (1985)), that is, sets of
beliefs of a fully rational agent. We show an operator constructing
these theories out of their objective parts and investigate the
complexity of the construction. We attempt to extend Moore's approach
to the case of predicate logic. Finally, we discuss the notion of
inessential modal extension of a first order theory.
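(For reference, a gloss of the standard definitions rather than
anything from the talk itself: a theory T is stable when it is closed
under consequence, contains Lφ whenever φ is in T, and contains ¬Lφ
whenever φ is not in T. A stable expansion of a premise set A is then
a solution of the fixpoint equation

   T = Cn( A ∪ { Lφ : φ ∈ T } ∪ { ¬Lφ : φ ∉ T } )

where Cn is ordinary consequence, treating the belief formulas Lφ as
atoms.)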
----------
NEW CSLI REPORT
Report No. CSLI-85-36, ``Limits of Correctness in Computers'' by
Brian Cantwell Smith, has just been published. This report may be
obtained by writing to David Brown, CSLI, Ventura Hall, Stanford, CA
94305 or Brown@SU-CSLI.
-------
∂14-Nov-85 0830 EMMA@SU-CSLI.ARPA Newsletter addition
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 14 Nov 85 08:30:05 PST
Date: Thu 14 Nov 85 08:25:39-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Newsletter addition
To: friends@SU-CSLI.ARPA
cc: newsletter@SU-CSLI.ARPA
Tel: 497-3479
The following was omitted from the newsletter.
LOGIC SEMINAR
Truth, the Liar, and Circular Propositions, Cont.
Jon Barwise and John Etchemendy
Friday, November 15, 1985
Last time John Etchemendy gave an informal introduction to Peter
Aczel's set theory ZF/AFA, and showed how to use it to model the
Austinian conception of proposition. He then discussed how the Liar,
the truth-teller, and other paradoxes and puzzles come out in this
model. This week I will review ZF/AFA very briefly and then use it to
model the Russellian conception of proposition, and discuss how the
same puzzles come out in this model. --Jon Barwise
Noon, Math Faculty Lounge, building 380.
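(As background, our gloss of Aczel rather than part of the
announcement: AFA trades the foundation axiom for the guarantee that,
roughly, every system of set equations has a unique solution. For
example, the equation

   x = {x}

has exactly one solution, the non-well-founded set usually written Ω;
circular propositions are then modeled as solutions of similar
equations.)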
-------
-------
∂14-Nov-85 2023 @SU-CSLI.ARPA:admin%cogsci@BERKELEY.EDU UCB Cognitive Science Seminar--Nov. 19 (R. Alterman, UCB)
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 14 Nov 85 20:23:20 PST
Received: from ucbvax.berkeley.edu by SU-CSLI.ARPA with TCP; Thu 14 Nov 85 19:58:52-PST
Received: by ucbvax.berkeley.edu (5.31/1.2)
id AA04624; Thu, 14 Nov 85 16:55:28 PST
Received: by cogsci (5.31/5.13)
id AA05204; Thu, 14 Nov 85 16:57:53 PST
Date: Thu, 14 Nov 85 16:57:53 PST
From: admin%cogsci@BERKELEY.EDU (Cognitive Science Program)
Message-Id: <8511150057.AA05204@cogsci>
To: cogsci-friends@cogsci.berkeley.edu
Subject: UCB Cognitive Science Seminar--Nov. 19 (R. Alterman, UCB)
BERKELEY COGNITIVE SCIENCE PROGRAM
Fall 1985
Cognitive Science Seminar - IDS 237A
Tuesday, November 19, 11:00 - 12:30
240 Bechtel Engineering Center
Discussion: 12:30 - 1:30 in 200 Building T-4
``Adaptive Planning is Commonsense Planning''
Richard Alterman
Computer Science Division, U.C.B.
A characteristic of commonsense planning is that it is
knowledge intensive. For most mundane sorts of situations
human planners have access to, and are capable of exploiting,
large quantities of knowledge. Commonsense planners re-use old
plans under their normal circumstances. Moreover, commonsense
planners are capable of refitting old plans to novel cir-
cumstances. A commonsense planner can plan about a wide range
of phenomena, not so much because his/her depth of knowledge is
consistent throughout that range, but because s/he can re-fit
old plans to novel contexts.
This talk is about an approach to commonsense planning
called ``adaptive planning''. An adaptive planner plans by exploit-
ing planning knowledge in a manner that delays the reduction of
commonsense planning to problem-solving. Key elements in the
theory of adaptive planning are its treatment of background
knowledge and the introduction of a notion of planning by
situation matching. This talk will describe adaptive planning
as it applies to a number of commonsense planning situations,
including: riding the NYC subway, trading books, transferring
planes at JFK airport, and driving a rented car.
----------------------------------------------------------------
UPCOMING TALKS
November 26: Eve Clark, Linguistics, Stanford
December 3: Bernard Baars, Langley Porter, UCSF
----------------------------------------------------------------
ELSEWHERE ON CAMPUS
Jeff Shrager of Xerox PARC will speak on ``Instructionless
Learning'' at the SESAME Colloquium on November 18, 2515 Tolman
Hall, 4:00pm.
The Physics Department is sponsoring a talk by J. J. Hopfield
of CALTECH on Wednesday, November 20 at 4:30pm in 1 Le Conte.
Dr. Hopfield will be speaking on ``Neural Networks.'' A tea
precedes the talk.
∂20-Nov-85 1814 EMMA@SU-CSLI.ARPA Newsletter November 21, No. 3
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 20 Nov 85 18:14:43 PST
Date: Wed 20 Nov 85 17:05:29-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Newsletter November 21, No. 3
To: friends@SU-CSLI.ARPA
Tel: 497-3479
!
C S L I N E W S L E T T E R
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
November 21, 1985 Stanford Vol. 3, No. 3
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR *THIS* THURSDAY, November 21, 1985
12 noon TINLunch
Ventura Hall Parsing as Deduction?
Conference Room by Fernando Pereira and David Warren
Discussion led by Mark Johnson (Johnson@su-csli.arpa)
(Abstract on page 2)
2:15 p.m. CSLI Seminar
Redwood Hall Interactive Modularity
Room G-19 Ron Kaplan, Xerox PARC (Kaplan.pa@xerox.arpa)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Redwood Hall An Introduction to Information-based Complexity
Room G-19 J. F. Traub, Computer Science Department, Columbia
←←←←←←←←←←←←
ANNOUNCEMENT
Please note that there will be no activities and no newsletter on
Thursday, November 28, because of the Thanksgiving Holiday. Thursday
activities will resume on December 5.
!
Page 2 CSLI Newsletter November 21, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
THIS WEEK'S TINLUNCH
Parsing as Deduction?
by Fernando Pereira and David Warren
Pereira and Warren's paper is exceptional in both its scope and its
content. It begins by proposing a translation of conventional phrase
structure rules into (Horn clause) logic that can be given to a
theorem prover, which then uses the logical translation to ``prove''
the well-formedness of sentences with respect to the original grammar
(hence the title, ``Parsing as Deduction'').
Secondly, Pereira and Warren show how standard context-free parsing
algorithms can be generalized as inference procedures that can
ultimately be used to mimic parsers for certain non-context-free
languages: thus showing us how to extend our parsing techniques for CF
languages (which we know fairly well) to non-context-free languages in
a straightforward way. Thus we can talk about ``the Earley parsing
algorithm for LFG,'' for instance.
And finally, they make some theoretical comparisons between the
parsers so obtained for various different frameworks, and derive
various properties regarding the parsing complexity. Quite a lot for
an eight-page paper!
While not wanting to restrict discussion, I suggest that we
concentrate on only one of these issues, namely the central claim that
parsing can be viewed as deduction. In what sense is it correct to do
so? Does it make sense computationally or psychologically? Or
linguistically or (dare I say it at CSLI) philosophically?
Secondly, what about the logical translations that Pereira and
Warren suggest? Their translation into logic for a rule like
VP --> V NP PP
is something like the following (expressed informally)
a V followed by an NP followed by a PP
implies the existence of a VP.
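(In the position-based encoding of the paper, such a rule becomes a
definite clause over string positions, roughly
   VP(P0,P) ← V(P0,P1) ∧ NP(P1,P2) ∧ PP(P2,P)
where each category predicate holds of the positions spanning the
phrase.)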
But consider a sentence like
I saw the man with the telescope
on the reading where the man, not I, had the telescope. The antecedent
of the logical translation of the rule is met, so the VP with the reading
where I used the telescope to see the man should exist, simultaneously
with the VP with the reading where the man has the telescope as well.
That is, we are forced to infer the simultaneous existence of VPs
corresponding to BOTH readings of the sentence.
Is there a problem here? And if so, why doesn't Pereira and
Warren's ``deductive'' approach run into problems with such ambiguous
sentences? --Mark Johnson
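   To make the worry concrete, here is a small sketch (the toy grammar
and the encoding are ours, purely for illustration) that reads rules
as Horn clauses Cat(i,j) over string positions and counts the distinct
proofs of each fact; the ambiguous sentence yields a single S fact but
two proofs of it:

  from functools import lru_cache

  GRAMMAR = {
      "S":  [("NP", "VP")],
      "VP": [("V", "NP"), ("V", "NP", "PP")],
      "NP": [("NP", "PP"), ("D", "N"), ("PN",)],
      "PP": [("P", "NP")],
  }
  LEXICON = {"I": "PN", "saw": "V", "the": "D",
             "man": "N", "with": "P", "telescope": "N"}
  WORDS = "I saw the man with the telescope".split()

  @lru_cache(maxsize=None)
  def derivations(cat, i, j):
      # Proofs of Cat(i,j): a lexical axiom plus one per rule and split.
      total = 1 if j == i + 1 and LEXICON.get(WORDS[i]) == cat else 0
      for rhs in GRAMMAR.get(cat, ()):
          total += sequence(rhs, i, j)
      return total

  @lru_cache(maxsize=None)
  def sequence(rhs, i, j):
      # Proofs that the categories in rhs cover the span i..j in order.
      if len(rhs) == 1:
          return derivations(rhs[0], i, j)
      return sum(derivations(rhs[0], i, k) * sequence(rhs[1:], k, j)
                 for k in range(i + 1, j))

  print(derivations("S", 0, len(WORDS)))   # 2: one fact, two proofs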
!
Page 3 CSLI Newsletter November 21, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
INTERACTIONS OF MORPHOLOGY, SYNTAX, AND DISCOURSE
``Morphological Structure of Kunparlang Verbs''
Carolyn Coleman, (Coleman@csli.arpa)
Thursday, November 21, 10 a.m., Conference Room
Kunparlang verbs are extremely complex morphologically. They
cross-reference Subject and Object functions, incorporate nominal
roots, use `applicative' derivational morphology, carry modal,
directional and aspectual affixes, and inflect for Tense and Mood.
There are two levels of hierarchical morphological structure:
(i) The stem, which carries all morphology having compositional semantics.
(ii) The lexical base, which carries all semantically idiosyncratic
morphology.
Kunparlang verbs undergo two types of reflexive operation which
have a partially complementary distribution and which have different
semantic effects on the verbs to which they apply. With the first
reflexive operation the reflexive subject is always an Actor; with the
second the reflexive subject may be an Undergoer. The second
reflexive operation has a range of meanings which match those of
mediopassive constructions in other languages as well as the reflexive
reading. Both reflexivizing operations are derivations that apply at
the level of the lexical base; given that they have the same
morphological status, there is a problem of how to semantically
characterize them in a manner that will clearly show the semantic
similarities and differences between them. --Carolyn Coleman
----------
PIXELS AND PREDICATES
Cleo Huggins
Wednesday, November 27, 1 p.m., Ventura Seminar Room
Graphic design is a profession that addresses problems with visual
communication. The issues that are covered by this area of work are
significant in the design of symbols.
There are problem-solving methods used in design. One is the use
of semiotics, the study of signs. I will describe how this field of
study can be applied to the design process.
Graphic designers also study the interaction of graphic elements.
One might say that design is about harnessing these ``visual basics''
and using them to reach a communication goal. I will look at some of
these elements and indicate how they are used in visual communication.
Designers seldom work on the design of isolated symbols. Often
visual communication takes on the responsibility of systems. Signage
in a building and instructions/directions for a process are examples
of graphic systems. In the design of a communication system one
develops a graphic language. Consistency is an important element in
the design of this language. I will look at a collection of computer
related graphic elements and emphasize the role of design at a global
level.
I recommend this talk to anyone who:
* Has a tendency to redesign icons to make them witty or clever.
* Believes that all graphic designers are advertising artists.
* Has their cousin design the graphic interface on their applications.
* Thinks that type is generally boring and would rather use
exciting typefaces more often.
* Is interested in how graphic design may be integrated into
work in other fields.
!
Page 4 CSLI Newsletter November 21, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
LOGIC SEMINAR
Proving Properties of Destructive LISP Programs
Ian Mason, Philosophy Dept., Stanford
Friday, November 22, 12:00-1:00, Math. Faculty Lounge, Room 383-N
----------
PIXELS AND PREDICATES
Fragments of Behavior
Luca Cardelli, Digital SRC
Wednesday, December 4, 1 p.m., Ventura Seminar Room
The talk discusses some general issues concerning visual
programming. A particular user interface paradigm is then presented,
where software tools can be visually assembled into larger and more
complex tools. The basic abstraction is called ``Fragment of
Behavior,'' which is a thread of control with an interface (for
connecting to other fragments) and an interaction protocol (for direct
user interaction). Composition of fragments involves both a
composition of interfaces and of interaction protocols, and determines
how the different fragments behave and interact concurrently.
The goals are (1) to allow different programmers to develop
``features'' of an application independently, including the user
interfaces, (2) to provide a library of very basic tools which users
can custom-assemble, and (3) to allow users to modify existing
compound tools
by adding, removing, or changing features.
----------
SRI TALK
Unification Revisited
Jean-Louis Lassez, IBM Thomas J. Watson Research Center
Monday, November 25, SRI, Room EJ242
There are three main approaches to the finite representation of sets
of solutions of equations in the Herbrand Universe. In Robinson's
classical approach the set of solutions is represented by an mgu which
is computed from the set of equations. We introduce a dual approach,
based on Plotkin's and Reynolds's concept of anti-unification, in which
the finite representation (mgs) is now ``lifted'' from the set of
solutions. A third approach proposed by Colmerauer is based on the
concept of eliminable variables.
The relationships between these three approaches are established.
This study provides an appropriate setting to address the problem
of solving systems of equations and inequations which arises in recent
extensions to Prolog. A key result is that the meta-equation
E = E1 v E2 v ... v En
admits solutions only in trivial cases. Two important corollaries
follow naturally. The first is Colmerauer's property of independence
of inequations. This means that deciding whether a system of
equations and inequations has solutions can be done in parallel. The
other corollary is a negative result; the set of solutions of a system
of equations and inequations can be finitely represented by mgu's only
in trivial cases. Consequently, one cannot obtain a simplified system
which is in ``solved'' form. This is unlike the case when only
equations are considered. Similar properties hold in inductive
inference when one attempts to generalize from sets of examples and
counter-examples.
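   For the classical starting point in executable form, here is a
minimal Robinson-style unification sketch (the term encoding and the
code are ours, not the talk's): variables are capitalized strings,
and compound terms are tuples of a functor and its arguments.

  def is_var(t):
      return isinstance(t, str) and t[:1].isupper()

  def walk(t, s):
      # Follow variable bindings to a representative term.
      while is_var(t) and t in s:
          t = s[t]
      return t

  def occurs(v, t, s):
      t = walk(t, s)
      return t == v or (isinstance(t, tuple)
                        and any(occurs(v, a, s) for a in t[1:]))

  def unify(a, b, s=None):
      # Return an mgu (as a substitution) extending s, or None.
      s = {} if s is None else s
      a, b = walk(a, s), walk(b, s)
      if a == b:
          return s
      if is_var(a):
          if occurs(a, b, s):
              return None             # occur check: no finite unifier
          return dict(s, **{a: b})
      if is_var(b):
          return unify(b, a, s)
      if (isinstance(a, tuple) and isinstance(b, tuple)
              and a[0] == b[0] and len(a) == len(b)):
          for x, y in zip(a[1:], b[1:]):
              s = unify(x, y, s)
              if s is None:
                  return None
          return s
      return None

  # f(X, g(a)) and f(g(Y), g(Y)) unify: {'X': ('g', 'Y'), 'Y': 'a'}
  print(unify(("f", "X", ("g", "a")), ("f", ("g", "Y"), ("g", "Y"))))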
!
Page 5 CSLI Newsletter November 21, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
FOUNDATIONS OF GRAMMAR
Foundations of Grammar group announces HUG
The FOG group met last week to hear Lauri Karttunen report on HUG,
a new development environment for unification-based grammars on Xerox
1100 workstations. HUG consists of four basic parts: (i) a
unification package, (ii) input/output routines for directed graphs,
(iii) an interpreter for rules and lexical entries, and (iv) an
Earley-style chart parser. All four are written in simple Interlisp-D
for transportability to other dialects of Lisp. The format for
grammar rules and lexical entries in HUG is based on SRI's PATR
system. In addition to its generic core, HUG contains facilities for
grammar development and debugging. These routines take full advantage
of the graphic capabilities of D-machines.
The grammar formalism in HUG is based on PATR. It is designed to
make it easy to encode anything from simple phrase structure grammars
to categorial grammars. From the parser's point of view, a grammar
rule is a single directed graph whose subparts correspond to syntactic
constituents. Lexical generalizations are expressed by means of
templates and lexical rules as in PATR. A Prolog-style treatment of
long-distance dependencies is built into the system.
HUG is now available for use at CSLI. The documentation is
currently in two sections. HUG.DOC (11 pages) in {HEINLEIN:}<HUG>
explains HUG's format for rules and lexical entries. HUGTOOLS.DOC (24
pages) is a user's manual. A section on HUG's parser and unification
routine is in preparation. For hard copies of these documents, see
Carol Kiparsky (Carol@csli.arpa).
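   To picture the ``rule as a single directed graph'' idea (the
encoding below is ours, not HUG's concrete format), reentrancy is
simply shared structure; here a PATR-style rule S -> NP VP makes the
two ``agr'' arcs point at one and the same node, so information
unified in at either end constrains both constituents:

  agreement = {}                     # a single shared (reentrant) node
  rule = {"cat": "S",
          "daughters": [{"cat": "NP", "agr": agreement},
                        {"cat": "VP", "agr": agreement}]}
  rule["daughters"][0]["agr"]["num"] = "sg"   # NP turns out singular...
  print(rule["daughters"][1]["agr"])          # {'num': 'sg'}: VP agrees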
----------
SUMMARY OF LOGIC SEMINAR
Truth, the Liar, and Circular Propositions, Cont.
Jon Barwise and John Etchemendy (Barwise@csli.arpa)
Friday, November 15, 1985
The previous time John Etchemendy gave an informal introduction to
Peter Aczel's set theory ZF/AFA, and showed how to use it to model the
Austinian conception of proposition. He then discussed how the Liar,
the truth-teller, and other paradoxes and puzzles come out in this
model. This time, I reviewed ZF/AFA very briefly and then used it to
model the Russellian conception of proposition, and discussed how the
same puzzles come out in this model. --Jon Barwise
(The announcement of the above Logic Seminar was accidentally omitted
from last week's newsletter.)
-------
∂21-Nov-85 0933 EMMA@SU-CSLI.ARPA Newsletter addition
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 21 Nov 85 09:33:34 PST
Date: Thu 21 Nov 85 09:26:28-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Newsletter addition
To: friends@SU-CSLI.ARPA
Tel: 497-3479
The SRI talk by Jean-Louis Lassez on November 25 will be at 2:00.
-------
∂21-Nov-85 1754 WINOGRAD@SU-CSLI.ARPA No ENVIRONMENTS meeting until Dec 9
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 21 Nov 85 17:54:12 PST
Date: Thu 21 Nov 85 17:47:16-PST
From: Terry Winograd <WINOGRAD@SU-CSLI.ARPA>
Subject: No ENVIRONMENTS meeting until Dec 9
To: friends@SU-CSLI.ARPA, su-bboards@SU-CSLI.ARPA
Sorry to have missed the newsletter with this. There will
be no meeting for the next two weeks. We will resume with
Danny Bobrow (Xerox) on the handling of object storage in LISP
(with COMMONLOOPS) on Dec 9, then Ron Kaplan (Xerox and CSLI) on
the grammar writer's workbench on Dec 16.
--t
-------
∂23-Nov-85 0352 @SU-CSLI.ARPA:admin%cogsci@BERKELEY.EDU UCB Cognitive Science Seminar--Nov. 26 (E. Clark, Stanford)
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 23 Nov 85 03:52:18 PST
Received: from ucbvax.berkeley.edu by SU-CSLI.ARPA with TCP; Sat 23 Nov 85 03:42:36-PST
Received: by ucbvax.berkeley.edu (5.31/1.7)
id AA00636; Thu, 21 Nov 85 12:17:32 PST
Received: by cogsci (5.31/5.16)
id AA09584; Thu, 21 Nov 85 12:14:19 PST
Date: Thu, 21 Nov 85 12:14:19 PST
From: admin%cogsci@BERKELEY.EDU (Cognitive Science Program)
Message-Id: <8511212014.AA09584@cogsci>
To: cogsci-friends@cogsci.berkeley.edu
Subject: UCB Cognitive Science Seminar--Nov. 26 (E. Clark, Stanford)
BERKELEY COGNITIVE SCIENCE PROGRAM
Cognitive Science Seminar - IDS 237A
Tuesday, November 26, 11:00 - 12:30
240 Bechtel Engineering Center
Discussion: 12:30 - 1:30 in 200 Building T-4
``Contrast as a Constraint in Acquisition''
Eve V. Clark
Department of Linguistics, Stanford University
[eclark@su-psych.arpa]
Speakers of a language tacitly subscribe to what I will call
the Principle of Contrast, namely that a difference in form
marks a difference in meaning. This principle, I propose,
offers a powerful tool to children acquiring language. It
serves to constrain the inferences they make about possible
meanings for new forms in the lexicon, in morphology, and in
syntax, by distinguishing them from the meanings of forms
already familiar. If the Principle of Contrast is observed by
children, three major predictions follow: (i) differences in
form should be taken to signal differences in meaning, (ii)
established forms should take priority over innovative ones,
and (iii) gaps in the lexicon should be filled on the one hand
by unfamiliar words and on the other by lexical innovations.
In this talk, I examine these predictions and show that
each is strongly supported by acquisition data. Children
appear to observe the Principle of Contrast from very early. I
will also argue that this principle offers a means for getting
rid of unconventional, over-regularized forms in the lexicon,
in morphology, and in syntax. The assumption that different
forms have different meanings is as indispensable in acquisi-
tion as it is in everyday use.
---------------------------------------------------------------
UPCOMING TALKS
December 3: Bernard Baars, Langley Porter, UCSF
---------------------------------------------------------------
∂25-Nov-85 1600 EMMA@SU-CSLI.ARPA The Next TINLunch
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 25 Nov 85 15:59:58 PST
Date: Mon 25 Nov 85 15:51:51-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: The Next TINLunch
To: friends@SU-CSLI.ARPA
Tel: 497-3479
Here is a description for the December 5 Tinlunch:
``A Humanistic Rationale for Technical Writing''
by Carolyn R. Miller
Discussion led by Geoff Nunberg
This paper is typical of a number of recent articles by
sociologists, rhetoricians, and humanistically-trained writing
specialists, which insist that scientific writing is no less
rhetorical in its means and effects than is writing of an explicitly
belletristic sort. Whether or not we find their arguments compelling,
these articles raise interesting questions for producers and consumers
of technical prose, especially in intellectually self-conscious
disciplines like philosophy, AI, and linguistics. For example: What is
the common understanding of the research enterprise that underlies the
linguistic conventions characteristic of scientific prose, such as the
avoidance of ``I'' and the unusual uses of ``we,'' the frequent use of
impersonal constructions, the numbering of paragraphs, and so on? Can
we apply the apparatus of traditional rhetoric to the evaluation of
the expository usefulness of particular formal languages and
notational conventions? Are there grounds for distinguishing between a
``rhetoric of information,'' concerned with the selection and
arrangement of factual observations, and a ``rhetoric of
description,'' concerned with the linguistic means used to report such
observations?
The TINLunch will be held in the Ventura Conference Room at noon.
-------
∂02-Dec-85 0947 @SU-CSLI.ARPA:admin%cogsci@BERKELEY.EDU UCB Cognitive Science Seminar--Dec. 3, 1985
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 2 Dec 85 09:46:56 PST
Received: from ucbvax.berkeley.edu by SU-CSLI.ARPA with TCP; Mon 2 Dec 85 09:37:20-PST
Received: by ucbvax.berkeley.edu (5.31/1.7)
id AA16750; Mon, 2 Dec 85 09:44:17 PST
Received: by cogsci (5.31/5.16)
id AA29615; Mon, 2 Dec 85 09:43:33 PST
Date: Mon, 2 Dec 85 09:43:33 PST
From: admin%cogsci@BERKELEY.EDU (Cognitive Science Program)
Message-Id: <8512021743.AA29615@cogsci>
To: allmsgs@cogsci.berkeley.edu, cogsci-friends@cogsci.berkeley.edu
Subject: UCB Cognitive Science Seminar--Dec. 3, 1985
Cc: admin@cogsci.berkeley.edu
BERKELEY COGNITIVE SCIENCE PROGRAM
Fall 1985
Cognitive Science Seminar - IDS 237A
Tuesday, December 3, 11:00 - 12:30
240 Bechtel Engineering Center
Discussion: 12:30 - 1:30 in 200 Building T-4
``An Approach to Conscious Experience''
Bernard J. Baars
Langley Porter Neuropsychiatric Institute, U.C.S.F.
Conscious experience has been widely viewed as a confusing
and ill-defined issue, and most psychologists have avoided it
until quite recently. However, there are straightforward ways
to specify reliable empirical constraints on the problem, sim-
ply by contrasting comparable pairs of events, one of which is
conscious and the other not. For example, we are typically
unconscious of highly predictable stimuli, though there is
strong evidence that such stimuli continue to be represented in
the nervous system. We are unconscious of automatized actions,
of the unattended stream in a selective attention paradigm, of
conceptual presuppositions, of the unconscious meaning of per-
ceptual and linguistic ambiguities, of lexical access, syntac-
tic rule-application, etc. In all these cases the unconscious
information continues to be represented and processed. Any
complete theory of conscious experience is bounded by, and must
ultimately account for, the entire set of such contrasts.
The empirical constraints converge on a model of the ner-
vous system as a distributed collection of specialists---
automatic, unconscious, and very efficient. Consciousness is
associated in this system with a "global workspace"---a memory
whose contents are broadcast to all the specialists. Special-
ists can compete or cooperate for access to the global
workspace, and those that succeed can recruit and control other
specialists in pursuit of their goals. Over the past seven
years this Global Workspace approach has been extended to a
number of puzzling issues, including action control and the
neurophysiological basis of consciousness.
----------------------------------------------------------------
ELSEWHERE
Peter Labudde, SESAME Group visiting scholar from EMS
Samedan/Switzerland, will speak on "Experiments for students in
everyday physics" at the SESAME Colloquium on Monday, December
2 at 4:00 p.m. in 2515 Tolman Hall, Campus.
Jim Greeno, EMST and Cognitive Science Program, will speak on
"How Problems Differ" at the Cognitive Psychology Colloquium on
Friday, December 6 at 4:00 p.m. in the Beach Room, 3105 Tolman
Hall, Campus.
----------------------------------------------------------------
∂04-Dec-85 1702 EMMA@SU-CSLI.ARPA Newsletter December 5, No. 4
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 4 Dec 85 17:02:28 PST
Date: Wed 4 Dec 85 16:30:33-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Newsletter December 5, No. 4
To: friends@SU-CSLI.ARPA
Tel: 497-3479
!
C S L I N E W S L E T T E R
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
December 5, 1985 Stanford Vol. 3, No. 4
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR *THIS* THURSDAY, December 5, 1985
12 noon TINLunch
Ventura Hall A Humanistic Rationale for Technical Writing
Conference Room by Carolyn R. Miller
Discussion led by Geoff Nunberg (Nunberg@csli)
(Abstract on page 2)
3:30 p.m. Tea
Ventura Hall
←←←←←←←←←←←←
CSLI ACTIVITIES FOR *NEXT* THURSDAY, December 12, 1985
12 noon TINLunch
Ventura Hall Title to be announced
Conference Room by Werner Frey and Hans Kamp
Discussion led by Werner Frey
(Abstract will appear next week)
2:15 p.m. CSLI Seminar
No seminar this week
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Title to be announced
Lynn Bloom
←←←←←←←←←←←←
ANNOUNCEMENT
Please note that the seminar and colloquium are no longer in Redwood
Hall room G-19. The new room will be announced in next week's
newsletter.
!
Page 2 CSLI Newsletter December 5, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
THIS WEEK'S TINLUNCH
A Humanistic Rationale for Technical Writing
by Carolyn R. Miller
Discussion led by Geoff Nunberg
This paper is typical of a number of recent articles by
sociologists, rhetoricians, and humanistically-trained writing
specialists, which insist that scientific writing is no less
rhetorical in its means and effects than is writing of an explicitly
belletristic sort. Whether or not we find their arguments compelling,
these articles raise interesting questions for producers and consumers
of technical prose, especially in intellectually self-conscious
disciplines like philosophy, AI, and linguistics. For example: What is
the common understanding of the research enterprise that underlies the
linguistic conventions characteristic of scientific prose, such as the
avoidance of ``I'' and the unusual uses of ``we'', the frequent use of
impersonal constructions, the numbering of paragraphs, and so on? Can
we apply the apparatus of traditional rhetoric to the evaluation of
the expository usefulness of particular formal languages and
notational conventions? Are there grounds for distinguishing between a
``rhetoric of information'', concerned with the selection and
arrangement of factual observations, and a ``rhetoric of description,''
concerned with the linguistic means used to report such observations?
----------
INTERACTIONS OF MORPHOLOGY, SYNTAX, AND DISCOURSE
Obligatory Control in Clausal Complements
Draga Zec (Zec@csli)
Thursday, December 5, 10:00, Ventura Conference Room
It is generally held that obligatory control correlates with the
non-finiteness of the complement. Both syntactic and semantic theories
of control have crucially depended on this particular assumption. My
intention is to present a case of obligatory control into clausal
complements, develop an analysis within the LFG framework, and then
explore the implications of this case for an adequate treatment of control.
Serbo-Croatian has two types of clausal complements, Type 1 which
is generally uncontrolled, and Type 2 which allows obligatory control
with predicates like `try', `intend', `persuade', `force', etc. It
will be shown that Type 2 complements cannot be dealt with in terms of
the LFG theory of functional control, or any other syntactic theory of
control. Rather, it will be argued that these complements are a clear
case of what in LFG is referred to as anaphoric control. Certain
differences in anaphoric binding properties between the two complement
types are attributed to the phenomenon of obviation which is found
with Type 2 but not with Type 1 complements.
Since anaphoric control cannot capture the systematic
controller/controllee relation characteristic of obligatory control,
one will have to make reference to the semantic or, more precisely,
thematic properties of control-inducing predicates. This may have
implications for syntactic theories of obligatory control, whose aim
is to make predictions about controller/controllee relations solely in
syntactic terms. This case will also be relevant for the semantic
analyses that account for control solely in terms of entailment.
--Draga Zec
Everyone interested in the syntax and semantics of control constructions
and their implications for linguistic theory is invited. Written
copies of the paper are available at the CSLI Receptionist's desk.
!
Page 3 CSLI Newsletter December 5, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
PIXELS AND PREDICATES
Spatial Parsing for Visual Languages
Fred Lakin (Lakin@csli)
1:00 pm, Wednesday, December 11, CSLI trailers
Graphics are very important in human/computer interaction. To
participate effectively in this kind of interaction, computers must be
able to understand how humans use graphics to communicate. When a
person employs a text and graphic object in communication, that object
has meaning under a system of interpretation, or ``visual language''.
A first step toward computer understanding of the visual communication
objects used by humans is computer parsing of such objects, recovering
their underlying syntactic structure. The research described in this
paper combines computer graphics, symbolic computation and textual
linguistics to accomplish ``spatial parsing'' for visual languages.
Parsing has been investigated in four visual languages: VennLISP (a
visual programming language based on LISP), VIC (a visual
communication system for aphasics), FSA (finite state automaton)
diagrams, and SIBTRAN (graphic devices for organizing textual sentence
fragments). A parser has been written which can recover the structure
for graphic communication objects in the different visual languages.
In addition, interactive visual communication assistants utilize
modules from the parser to actively assist humans using two of the
visual languages.
----------
LOGIC SEMINAR
Proving Properties of Destructive LISP Programs (cont.)
Ian Mason, Philosophy Dept., Stanford
Friday, December 6, 12:00-1:00, Math. Faculty Lounge, Room 383-N
----------
ENVIRONMENTS GROUP MEETING
Grammar-writer's Workbench
Ronald Kaplan, Xerox and CSLI (Kaplan@xerox)
Monday, December 9, noon, Ventura Seminar Room
----------
PHONOLOGY/PHONETICS MEETING
Adequacy in Intonation Analysis: The Case of Dutch
Carlos Gussenhoven
Wednesday, December 11, 3:30, Ventura Conference Room
!
Page 4 CSLI Newsletter December 5, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
AFT MEETINGS
In the winter term, I shall start the regular meetings of the AFT
(Aitiational Frame Theory) Project on lexical representation. The
meetings will be once a week on Tuesdays at 11, and the first one will
be on January 14. The room will be announced later.
The project will be concerned with the construction of an adequate
theory of lexical representation and lexical meaning. While my
interests center around the AFT proposal, the main aim will be to
compare available theories of lexical meaning and come up with what
will seem to the group to be the best one.
In addition to the weekly meetings, there will be guest
presentations by speakers such as Joseph Almog (UCLA), Nathan Salmon
(UCSB), and Scott Soames (Princeton).
The following is a partial list of topics to be discussed.
a. Lexical meaning and the needed input for compositional semantics.
b. Lexical meaning and the needed input for syntax.
c. Lexical meaning and non-monotonic reasoning.
d. AFT and doubts about the claim that natural languages are formal
languages.
e. Lexical meaning and ``psychological reality.''
f. Lexical meaning, AFT, and morphology.
I would appreciate it if those who are interested in joining this
group would contact me via computer mail (Julius@csli) or phone
((415)497-2130) during the next couple of weeks. --Julius Moravcsik
-------
∂05-Dec-85 1223 EMMA@SU-CSLI.ARPA Newsletter correction
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 5 Dec 85 12:23:01 PST
Date: Thu 5 Dec 85 11:40:27-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Newsletter correction
To: friends@SU-CSLI.ARPA
Tel: 497-3479
The name of the speaker for the December 12 colloquium is Bjorn
Lindblom, not Lynn Bloom as stated in the newsletter. The abstract for
the colloquium follows.
CSLI COLLOQUIUM
Thursday, December 12, 1985, 4:15
Turing Auditorium (not Redwood Hall room G-19)
THEMES IN THE EVOLUTIONARY BIOLOGY OF LANGUAGE
A three-ring circus
Bjorn Lindblom, Peter MacNeilage, Michael Studdert-Kennedy
CASBS, Stanford
The goal of our research is summarized by the phrase: DERIVE LANGUAGE
FROM NON-LANGUAGE! We are exploring an approach to the biology of
language that is deliberately distinct from that pursued within
Chomskyan autonomous linguistics. We take as our first priority an
explicit search for precursors to all aspects of language structure
and speech behavior. By precursors we mean either evolutionary
precursors, traceable to lower animals, or those human but
non-linguistic, cognitive, perceptual and motor capacities that both
constrain language and make it possible. Only when a search for such
precursors has failed can we justly term some characteristic
unique---either to language or to man---and attribute it to some
species-specific bioprogram for language learning and use (cf.
universal grammar). In short, while we acknowledge and respect the
discoveries of formal linguistics, we believe that a sound approach to
the biology of language must go beyond form and structure to ask:
``How did language get that way?''
A major language universal for which any phylogenetic or ontogenetic
theory must account is LA DOUBLE ARTICULATION, or DUALITY of
patterning. We view the emergence of duality---that is, the use of
discrete elements and combinatorial rules at the two levels of
phonology and syntax---as the key to the communicative power of
language: duality provides a kind of impedance match between the
essentially unlimited semantics of cognition and a decidedly limited
set of signaling devices and processes.
Our central concern is with phonology: with the origin of discrete
phonological elements---phonemes and features---and with the processes
by which these intrinsically amodal elements are instantiated in the
modalities of production and perception. We shall review typological
facts about sound structure leading us to conclude that phonological
form adapts to the performance constraints of language use. How do we
choose our theoretical formalism for describing sound patterns?
Markedness theory or contemporary developments of generative phonology
and formal linguistics? No, since (i) spoken language is a product of
biological and cultural evolution; and (ii) there is considerable
empirical justification for viewing phonologies as adaptations to
biological and social selectional pressures, the correct choice
appears to be some variant of the theoretical framework currently
explored by many students of biological and cultural evolution, viz.,
a Darwinian VARIATION*SELECTION model. In our talk we will present a
computational implementation of such a model. We will illustrate some
of its explanatory power by means of simulations indicating how a
number of typological facts can be predicted quantitatively and how
the emergence of ``a featural and segmental'' organization of lexical
systems can be derived in a self-organizing manner and deductively
(rather than just axiomatically). Drawing on corpora of speech error
data we describe the process by which discrete elements are organized
and sequenced in an actual utterance (phonologically and
syntactically) as one of inserting elements into a structured frame,
and, in our talk, we will consider the evolutionary relation between
this FRAME-CONTENT mode of organization and bimanual coordination.
Finally we will consider behavioral and neurophysiological evidence,
from both adults and children, consistent with a view of the
phonological element as an AMODAL structure linking production and
perception.
*Turing Auditorium is near Ventura Hall.
-------
∂09-Dec-85 1720 @SU-CSLI.ARPA:WALDINGER@SRI-AI.ARPA seminar on program transformation wednes, 3:45
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 9 Dec 85 17:20:01 PST
Received: from SRI-AI.ARPA by SU-CSLI.ARPA with TCP; Mon 9 Dec 85 17:16:55-PST
Date: Mon 9 Dec 85 17:09:43-PST
From: WALDINGER@SRI-AI.ARPA
Subject: seminar on program transformation wednes, 3:45
To: AIC-Associates: ;,
CSL: ;, su-bboards@SU-AI.ARPA, friends@SU-CSLI.ARPA, bboard@SRI-AI.ARPA
Title: A Closer Look at the Tupling Strategy for Program Transformation
Speaker: Alberto Pettorossi, IASI-CNR, Rome, Italy
Place: EK242 (Old AIC Conference Room), SRI International,
Ravenswood Avenue and Pine Street
Time: 3:45 pm Wednesday, 11 December
Coffee in Waldinger office at 3:15
Abstract:
Tupling is a strategy for transforming programs expressed as recursive
equations. We see how it applies to some challenging "little"
problems: the tower of Hanoi, the Chinese rings problem, drawing
Hilbert curves, and computing recurrence relations. We characterize
the power of the tupling strategy in terms of the structure of the
graphs we obtain by unfolding the functions of the programs.
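A standard textbook instance of the strategy (our example, not
necessarily one from the talk): the recursive equations fib(n) =
fib(n-1) + fib(n-2) recompute shared subproblems and take exponential
time. Tupling transforms the program into one whose single function
returns the pair (fib(n), fib(n+1)), collapsing the two overlapping
calls into one and making the time linear in n:

  def fib_naive(n):
      return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

  def fib_tupled(n):
      # Returns (fib(n), fib(n+1)) using a single recursive call.
      if n == 0:
          return (0, 1)
      a, b = fib_tupled(n - 1)       # (fib(n-1), fib(n))
      return (b, a + b)              # (fib(n),   fib(n+1))

  assert fib_tupled(20)[0] == fib_naive(20) == 6765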
-------
∂11-Dec-85 1752 EMMA@SU-CSLI.ARPA Newsletter December 12, No. 5
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 11 Dec 85 17:52:20 PST
Date: Wed 11 Dec 85 17:07:17-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Newsletter December 12, No. 5
To: friends@SU-CSLI.ARPA
Tel: 497-3479
!
C S L I N E W S L E T T E R
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
December 12, 1985 Stanford Vol. 3, No. 5
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR *THIS* THURSDAY, December 12, 1985
12 noon TINLunch
Ventura Hall Plural Determiners and Plural Anaphora
Conference Room by Werner Frey and Hans Kamp
Discussion led by Werner Frey
(Abstract on page 1)
2:15 p.m. CSLI Seminar
No seminar this week
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Themes in the Evolutionary Biology of Language:
A three-ring circus
Bjorn Lindblom, CASBS
(Abstract on page 2)
--------------
ANNOUNCEMENT
Please note that the seminar and colloquium are no longer in Redwood
Hall room G-19. The new room is Turing Auditorium in Jordan Quad.
--------------
THIS WEEK'S TINLUNCH
Plural Determiners and Plural Anaphora
Werner Frey, University of Texas
Werner Frey will discuss his and Hans Kamp's work on plural noun
phrases, focusing on:
a) The interpretation of anaphoric plural pronouns, with special
attention to the ways in which it differs from that of singular
pronouns.
b) The difference between definite plural NP's, such as `the boys'
and `indefinite' plurals, such as `many boys'.
c) The nature of the definite article `the', both in its plural and
its singular uses.
Copies of a longer abstract will be available at the Ventura desk.
!
Page 2 CSLI Newsletter December 12, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
THIS WEEK'S COLLOQUIUM
Themes in the Evolutionary Biology of Language: A three-ring circus
Bjorn Lindblom, Peter MacNeilage, Michael Studdert-Kennedy, CASBS
The goal of our research is summarized by the phrase: DERIVE
LANGUAGE FROM NON-LANGUAGE! We are exploring an approach to the
biology of language that is deliberately distinct from that pursued
within Chomskyan autonomous linguistics. We take as our first priority
an explicit search for precursors to all aspects of language structure
and speech behavior. By precursors we mean either evolutionary
precursors, traceable to lower animals, or those human but
non-linguistic, cognitive, perceptual and motor capacities that both
constrain language and make it possible. Only when a search for such
precursors has failed can we justly term some characteristic
unique---either to language or to man---and attribute it to some
species-specific bioprogram for language learning and use (cf.
universal grammar). In short, while we acknowledge and respect the
discoveries of formal linguistics, we believe that a sound approach to
the biology of language must go beyond form and structure to ask:
``How did language get that way?''
A major language universal for which any phylogenetic or
ontogenetic theory must account is LA DOUBLE ARTICULATION, or DUALITY
of patterning. We view the emergence of duality---that is, the use of
discrete elements and combinatorial rules at the two levels of
phonology and syntax---as the key to the communicative power of
language: duality provides a kind of impedance match between the
essentially unlimited semantics of cognition and a decidedly limited
set of signaling devices and processes.
Our central concern is with phonology: with the origin of discrete
phonological elements---phonemes and features---and with the processes
by which these intrinsically amodal elements are instantiated in the
modalities of production and perception. We shall review typological
facts about sound structure leading us to conclude that phonological
form adapts to the performance constraints of language use. How do we
choose our theoretical formalism for describing sound patterns?
Markedness theory or contemporary developments of generative phonology
and formal linguistics? No, since (i) spoken language is a product of
biological and cultural evolution; and (ii) there is considerable
empirical justification for viewing phonologies as adaptations to
biological and social selectional pressures, the correct choice appears
to be some variant of the theoretical framework currently explored by
many students of biological and cultural evolution, viz., a Darwinian
VARIATION*SELECTION model. In our talk we will present a computational
implementation of such a model. We will illustrate some of its
explanatory power by means of simulations indicating how a number of
typological facts can be predicted quantitatively and how the
emergence of ``a featural and segmental'' organization of lexical
systems can be derived in a self-organizing manner and deductively
(rather than just axiomatically). Drawing on corpora of speech error
data we describe the process by which discrete elements are organized
and sequenced in an actual utterance (phonologically and syntactically)
as one of inserting elements into a structured frame, and, in our
talk, we will consider the evolutionary relation between this
FRAME-CONTENT mode of organization and bimanual coordination. Finally
we will consider behavioral and neurophysiological evidence, from both
adults and children, consistent with a view of the phonological
element as an AMODAL structure linking production and perception.
!
Page 3 CSLI Newsletter December 12, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
INTERACTIONS OF MORPHOLOGY, SYNTAX, AND DISCOURSE
Reflexivization Variation:
Relations between Syntax, Semantics, and Lexical Structure
Peter Sells, Annie Zaenen, Draga Zec
Thursday, December 12, 10:00, Ventura Conference Room
In this paper we examine the distinction between so-called
transitive and intransitive reflexive constructions in several
languages (English, Finnish, German, Dutch, Chichewa, Warlpiri,
Serbo-Croatian and Japanese). We argue that three types of
distinctions have to be made: transitivity versus intransitivity in
the lexicon, synthetic versus analytic forms in the constituent
structure and open versus closed predicates in the semantics; thus
there are three relevant levels of possible cross-linguistic
variation. While there is a one-way implication between lexical
intransitivity and closed predication, there are in general no direct
correlations between either the lexical forms or the semantic forms
and their constituent structure representation.
We give an account of the different types of reflexive that we
discuss in Lexical-Functional Grammar augmented with Discourse
Representation Structures.
Copies of the paper are available at the front desk.
----------
CSLI SEMINAR
NETTALK: Teaching a Massively-Parallel Network to Talk
Terrence J. Sejnowski, Johns Hopkins
1:00pm, Wednesday, December 18, CSLI trailers
A special seminar in place of Pixels and Predicates
Text to speech is a difficult problem for rule-based systems
because English pronunciation is highly context dependent and there
are many exceptions to phonological rules. An alternative knowledge
representation for correspondences between letters and phonemes will
be described in which rules and exceptions are treated uniformly and
can be determined with a learning algorithm in a connectionist model.
The architecture is a layered network of 400 simple processing units
with 9,000 weights on the connections between the units. The training
corpus is continuous informal speech transcribed from tape recordings.
Following training on 1000 words from this corpus the network can
generalize to novel text. Even though this network was not designed
to mimic human learning, the development of the network in some
respects resembles the early stages in human language acquisition.
Following damage of the network by either removal of units or addition
of random values to the weights, the performance of the network
degraded gracefully. Issues which will be addressed include scaling
of the learning algorithm with the size of the problem, robustness of
learning to predicate order of the problem, and universality of
learning in connectionist models.
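   For the general shape of such a network in code, here is a
schematic sketch (the sizes, input coding, and training details below
are illustrative assumptions, not NETtalk's actual configuration): a
window of one-of-N letter codes feeds a hidden layer, which feeds the
phoneme units, trained by plain backpropagation on squared error.

  import numpy as np

  rng = np.random.default_rng(0)
  WINDOW, ALPHABET, HIDDEN, PHONEMES = 7, 29, 80, 26
  N_IN = WINDOW * ALPHABET           # one-of-29 letter code per slot

  W1 = rng.normal(0.0, 0.1, (HIDDEN, N_IN))
  W2 = rng.normal(0.0, 0.1, (PHONEMES, HIDDEN))

  def sigmoid(z):
      return 1.0 / (1.0 + np.exp(-z))

  def forward(x):
      h = sigmoid(W1 @ x)            # hidden-layer activations
      return h, sigmoid(W2 @ h)      # phoneme-unit activations

  def train_step(x, target, lr=0.5):
      # One step of backpropagation on squared error.
      global W1, W2
      h, y = forward(x)
      d_out = (y - target) * y * (1.0 - y)
      d_hid = (W2.T @ d_out) * h * (1.0 - h)
      W2 -= lr * np.outer(d_out, h)
      W1 -= lr * np.outer(d_hid, x)
      return y

  # Smoke test: one letter window, one target phoneme, a few steps.
  x = np.zeros(N_IN)
  for slot in range(WINDOW):         # one active letter unit per slot
      x[slot * ALPHABET + rng.integers(ALPHABET)] = 1.0
  target = np.zeros(PHONEMES)
  target[0] = 1.0
  for _ in range(50):
      y = train_step(x, target)
  print(y[0])                        # target phoneme's activation rises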
!
Page 4 CSLI Newsletter December 12, 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
ENVIRONMENTS GROUP MEETING
DefinitionGroups: Organizing Programs in Time and Space
Daniel Bobrow, Xerox PARC
Monday, 12:00, December 16, Ventura Trailers
Most current systems use files for long-term storage and to
organize the conceptual structure of the system. The definition
groups project (with Daniel Bobrow, David Fogelsong and Mark Miller)
is exploring an object oriented approach to the organization of a
system, and to the maintenance of sequential incremental changes. It
will also support the exploration of alternative development paths.
Extensive use of browsers allows alternative views and interaction
with the program structure.
DEFGROUPS uses the current file system as a base, but is set up to
move to a database system. A prototype is currently working and
supports its own development.
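As a rough illustration of what such an organization might look like
(the names and interfaces below are invented for this sketch, not
those of DEFGROUPS), definitions can be kept in group objects that
record every change and support branching development paths:
    class DefinitionGroup:
        def __init__(self, name):
            self.name = name
            self.history = []        # sequential incremental changes
            self.definitions = {}    # current view: name -> source

        def define(self, defname, source):
            # every (re)definition is appended to the history
            self.history.append((defname, source))
            self.definitions[defname] = source

        def branch(self, name):
            # an alternative development path sharing history so far
            alt = DefinitionGroup(name)
            alt.history = list(self.history)
            alt.definitions = dict(self.definitions)
            return alt

        def view_at(self, step):
            # browse the state of the group after `step' changes
            state = {}
            for defname, source in self.history[:step]:
                state[defname] = source
            return state

    g = DefinitionGroup("parser")
    g.define("tokenize", "def tokenize(s): return s.split()")
    alt = g.branch("parser-experimental")
    alt.define("tokenize", "def tokenize(s): return list(s)")
    print(g.view_at(1))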
-------
∂19-Dec-85 1533 EMMA@SU-CSLI.ARPA Newsletter December 19, No. 6
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 19 Dec 85 15:33:26 PST
Date: Thu 19 Dec 85 15:11:49-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Newsletter December 19, No. 6
To: friends@SU-CSLI.ARPA
Tel: 497-3479
C S L I N E W S L E T T E R
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
December 19, 1985 Stanford Vol. 3, No. 6
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR THURSDAY, January 9, 1986
12 noon TINLunch
Ventura Hall Some Remarks on How Words Mean
Conference Room by Georgia Green
Discussion led by Susan Stucky
(Abstract will be in the next newsletter)
2:15 p.m. CSLI Seminar
Whither CSLI? II
John Perry, CSLI Director
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
None planned
--------------
ANNOUNCEMENT
Please note that the seminar and colloquium are no longer in Redwood
Hall room G-19. The new location will be announced in early January.
Please also note that no activities are planned for December 19 and
26. Activities and the newsletter will resume January 9.
--------------
TALK ON JANUARY 2
On Thursday, January 2 at 2:15, Alexis Manaster-Ramer will discuss
``Finding Natural Languages a Home in Formal Language Theory''
(coauthored with William C. Rounds and Joyce Friedman, abstract to be
distributed later). The talk will be in Ventura Hall.
--------------
TENTATIVE WINTER QUARTER SCHEDULE
THURSDAY SEMINARS:
Date Speaker or Organizer
January 9 John Perry
January 16 Embedded Computation: Research on Situated Automata
January 23 Embedded Computation: Semantically Rational Computer
Languages
January 30 Helene Kirchner
February 6 Embedded Computation: Representation and Reasoning
February 13 Semantics of Computational Languages
February 20 Carol Cleland
February 27 Linguistic Approaches to Computer Languages
March 6 Mats Rooth
THURSDAY COLLOQUIA: (one colloquium during each period)
Time Organizers
January 9 to January 23: Embedded Computation group and
Document Preparation group
January 30 to February 13: Embedded Computation group,
Computational Language Semantics group and
Linguistic Approaches to Computer Languages
group
--------------
INTERACTIONS OF MORPHOLOGY, SYNTAX, AND DISCOURSE
Pronominal Incorporation in Finnish Possessives
Jonni Kanerva, (jkanerva@csli.arpa)
Thursday, December 19, 10:00, Ventura Conference Room
A class of five morphemes in Finnish, traditionally called
possessive suffixes (henceforward Px), raises interesting questions
about the relationship of morphological structure to syntactic
functions. Px's appear to be pronominal, anaphoric, or even
agreement-like elements that occur on nominals and nonfinite verbs
following case suffixes. They are important syntactically: among
other things, they occur as possessors of nouns and as subjects of
nonfinite clauses. The very importance of Px's in the syntax tempts
one to analyse them as syntactic units---clitics---that are joined
phonologically to host words, as two recent analyses have done.
Nonetheless, a number of facts in Finnish indicate that these
syntactic functions are borne by truly morphological
units---suffixes---rather than clitics.
I argue from phonological, morphological, and semantic evidence.
First, any allomorphy or phonological alternation in Finnish that is
sensitive to word boundaries treats the undisputed suffixes and the
Px's alike as being inside the word and treats a class of clitics as
being outside the word. Second, the occurrence of Px's is sometimes
dependent on the specific morphological structure of the stem. Third,
a large number of semantically idiosyncratic lexical items containing
Px's provide further support for a suffixal analysis of Px's, insofar
as suffixes are more susceptible to idiosyncratic lexicalization than
clitics. I argue next against the possibility that Px's are lexical
level clitics (i.e., clitics that attach to words at the morphological
level) by showing that it is quite costly to the theory of lexical
phonology to have a lexical level in Finnish that contains all of the
undisputed suffixes yet excludes the Px's; hence Px's must occupy the
same lexical level as other suffixes. Considering, then, all of the
evidence favoring a suffixal analysis for the Px's, especially the
morphological interactions between Px's and their stems, the case for
setting Px's apart from the other suffixes solely on the basis of
morpheme order is extremely weak. This indicates that the Px's are indeed
suffixes and therefore that a syntactic analysis of Px's should be
consistent with this finding.
SUMMARY OF A CSLI TALK
Agreement in Arabic, Binding and Extended Coherence
Abdelkader Fassi Fehri, (Fehri@csli.arpa)
We provide a fragment of a conceptual framework in which agreement
phenomena can be naturally characterised in correlation with
grammatical functions, and in which the appropriate well-formedness
constraints on functional structures have the effect of ruling out
agreement relations that are unlikely to occur in natural languages. More
specifically, we assume a taxonomy of grammatical functions
distinguishing three classes: lexical and nuclear grammatical
functions (Subj, Obj, Obl, ...), non-lexical but nuclear grammatical
functions (Adjunct, Modifier), and non-lexical non-nuclear (Top, Foc,
...). Non-lexical non-nuclear grammatical functions are sometimes
called discourse functions or DFs. We argue that Coherence, whose
initial essential role in KB was to ensure the duplication in the
syntax of lexical-government relations, should be redefined to extend
to non-lexical nuclear as well as non-lexical non-nuclear grammatical
functions. We furthermore argue for a typology of agreement
distinguishing `grammatical' agreement (GAgr) from `anaphoric'
agreement (AAgr). GAgr is with lexical grammatical functions, AAgr
with non-lexical grammatical functions. Our proposal is that what
appears to be an anaphoric agreement marker is in fact an incorporated
pronoun. The different types of agreement fall out as effects of our
Extended Coherence Condition plus other independently motivated
well-formedness conditions on functional structures.
-------
∂06-Jan-86 1445 JAMIE@SU-CSLI.ARPA Thursday Events
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 6 Jan 86 14:45:30 PST
Date: Mon 6 Jan 86 14:43:01-PST
From: Jamie Marks <JAMIE@SU-CSLI.ARPA>
Subject: Thursday Events
To: friends@SU-CSLI.ARPA
We do not have the use of Redwood G-19 on Thursdays this winter
because regularly scheduled classes are being held there. So far, I
have not found another room, but hope to do so soon. Check your
mail for the location of this Thursday's events.
-- Jamie
-------
∂08-Jan-86 1313 EMMA@SU-CSLI.ARPA This week's TINLunch
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 8 Jan 86 13:13:47 PST
Date: Wed 8 Jan 86 13:10:47-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: This week's TINLunch
To: friends@SU-CSLI.ARPA
Tel: 497-3479
THIS WEEK'S TINLUNCH
Some Remarks on How Words Mean
by Georgia Green
Green (in her 1983 paper ``Some remarks on how words mean'') makes
the claim that a large class of common nouns such as `cat' and
`pencil' in English are best viewed as not having meaning, that is, as
not having senses or intensions. Instead, she argues, such common
nouns are used to refer ``as names for kinds of objects or properties
(or events, or whatever)''. However, what is most interesting about
her point of view is not the claim about the name-like character of
common nouns, but rather that her analysis relies on a three-way
distinction between the language, the language-user and the world.
For instance, the ambiguity between kind-level and object-level uses
of nouns is not, as in Carlson's (1977) analysis, based on differences
in the language, but, rather, on differences in the use of language,
or in differences in how people refer. She tells linguists that it
is nonsensical to do a semantic analysis of the word `clock' because
what is really the object of study is the kind that the word `clock'
names. Typically, discussions about the language/world relation and
discussions about the language/people relation are carried on
separately by different groups of people. This paper will serve us
well, I think, as a springboard for discussion of just how, on the
tripartite view, we are going to separate out facts about language
from facts about people, and both of those from facts about the world.
--Susan Stucky
-------
∂08-Jan-86 1758 EMMA@SU-CSLI.ARPA Newsletter January 9, No. 7
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 8 Jan 86 17:58:14 PST
Date: Wed 8 Jan 86 16:53:32-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Newsletter January 9, No. 7
To: friends@SU-CSLI.ARPA
Tel: 497-3479
C S L I N E W S L E T T E R
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
January 9, 1986 Stanford Vol. 3, No. 7
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR THIS THURSDAY, January 9, 1986
12 noon TINLunch
Ventura Hall Some Remarks on How Words Mean
Conference Room by Georgia Green
Discussion led by Susan Stucky (Stucky@csli)
(Abstract on page 2)
2:15 p.m. CSLI Seminar
Turing Aud. Whither CSLI? II
Polya Hall John Perry, CSLI Director
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
No colloquium
--------------
CSLI ACTIVITIES FOR NEXT THURSDAY, January 16, 1986
12 noon TINLunch
Ventura Hall Generalized Quantifiers and Plurals
Conference Room by Godehard Link
Discussion led by Mats Rooth (Rooth@csli)
(Abstract on page 2)
2:15 p.m. CSLI Seminar
To be announced
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
None planned
--------------
ANNOUNCEMENT
Please note that the seminar and colloquium are no longer in
Redwood Hall room G-19. We are trying to get a new place; however,
the university will not schedule a room until the second week of the
quarter. This week's seminar is in Turing Auditorium, which is at one
end of Polya Hall, just behind Redwood.
This newsletter is available and the CSLI computers (CSLI and
Russell) are working this week largely due to the effort of Joe
Zingheim, who installed temporary chillers in the computer room on
schedule. He and the others who put in extra hours and extraordinary
effort have our thanks.
THIS WEEK'S TINLUNCH
Some Remarks on How Words Mean
by Georgia Green
Green (in her 1983 paper ``Some remarks on how words mean'') makes
the claim that a large class of common nouns such as `cat' and
`pencil' in English are best viewed as not having meaning, that is, as
not having senses or intensions. Instead, she argues, such common
nouns are used to refer ``as names for kinds of objects or properties
(or events, or whatever)''. However, what is most interesting about
her point of view is not the claim about the name-like character of
common nouns, but rather that her analysis relies on a three-way
distinction between the language, the language-user and the world.
For instance, the ambiguity between kind-level and object-level uses
of nouns is not, as in Carlson's (1977) analysis, based on differences
in the language, but, rather, on differences in the use of language,
or in differences in how people refer. She tells linguists that it
is nonsensical to do a semantic analysis of the word `clock' because
what is really the object of study is the kind that the word `clock'
names. Typically, discussions about the language/world relation and
discussions about the language/people relation are carried on
separately by different groups of people. This paper will serve us
well, I think, as a springboard for discussion of just how, on the
tripartite view, we are going to separate out facts about language
from facts about people, and both of those from facts about the world.
--Susan Stucky
--------------
NEXT WEEK'S TINLUNCH
Generalized Quantifiers and Plurals
by Godehard Link
This paper reviews part of Link's logic of plurals and mass terms
and applies it to a variety of quantificational constructions. Link
argues that some but not all complex plural NPs express genuine plural
quantification. An example of genuine plural quantification is ``any
two men'', which can denote a generalized quantifier, the elements of
which are properties of groups. Other issues discussed include
floated quantifiers, numerals, and the German particle ``je''.
--Mats Rooth
--------------
FOUNDATIONS OF GRAMMAR
On Phrase Structure
Alexis Manaster-Ramer
Thursday, January 9, 4:15 p.m., Ventura Conference Room
As a special FOG event, we will take advantage of Alexis
Manaster-Ramer's brief return to the Bay Area. All are invited, but
the talk should be of special interest to members of the FOG project.
COMPUTATIONAL MODELS OF SPOKEN LANGUAGE
Exploiting Equivalence Sets to Recognize Speech: The NEXUS Project
Gary Bradshaw, Ph.D.
Institute of Cognitive Science, University of Colorado
Thursday, January 9, 10 a.m., Ventura Conference Room
Theoretical accounts of the speech perception process must explain
``the invariance problem,'' where human listeners assign the same
label to a large set of different stimuli. Although many specific
proposals have been advanced, they can all be categorized into a small
set of classes. The talk will begin with a discussion of the various
classes of processes proposed to accommodate variability. Next, an
isolated-word speech recognition system, NEXUS, will be described.
Although NEXUS is not intended as a detailed model of human speech
perception, the system bears many similarities to human linguistic
performance. Learning heuristics in NEXUS analyze the vocabulary into
an inventory of sub-word units, roughly corresponding to phonetic
segments. NEXUS can recognize that different words share
subsequences, and build word models that reflect this sharing. These
capabilities permit NEXUS to function effectively with a difficult
recognition vocabulary; the error rate was found to be only one-third
that of a state-of-the-art template-based recognition system.
Confusion matrices strongly resemble human perceptual confusions.
Time permitting, planned extensions of NEXUS to the difficult
problems of multiple speakers and connected speech will be
described.
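The sharing idea can be conveyed by a small, self-contained Python
sketch (with invented ``phonetic'' data; this is not the NEXUS
learning heuristic itself): shared stretches of phone sequences are
factored into a common inventory of sub-word units.
    from difflib import SequenceMatcher

    def shared_units(seq_a, seq_b, min_len=2):
        # matching stretches between two phone sequences
        m = SequenceMatcher(None, seq_a, seq_b)
        return [seq_a[blk.a:blk.a + blk.size]
                for blk in m.get_matching_blocks()
                if blk.size >= min_len]

    lexicon = {"cat": "k ae t".split(),
               "cab": "k ae b".split(),
               "bat": "b ae t".split()}

    inventory = set()
    words = list(lexicon)
    for i in range(len(words)):
        for j in range(i + 1, len(words)):
            for unit in shared_units(lexicon[words[i]],
                                     lexicon[words[j]]):
                inventory.add(tuple(unit))

    print(inventory)   # e.g. {('k', 'ae'), ('ae', 't')}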
--------------
LEXICAL PROJECT
Lexical Meaning and Valence
Mark Gawron (gawron@csli)
Monday, January 13, 10 a.m., Ventura Conference Room
The talk will focus on one version of a semantic account of
valence. Given a verb meaning, the account gives a set of possible
valences: each valence selects a subject and object from among the
verb's arguments (if any), and specifies the marking (such as a
particular preposition) for any obliques. I will then turn to some
consequences of such an account for a theory of lexical rules, and
some problems in the valence of nouns.
(This is the first meeting of the Lexical project for this quarter.
Future meetings will be on Mondays at 10 a.m. every other week.)
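To indicate the shape of such an account, here is a toy Python sketch
(the thematic labels and markings are invented, and the enumeration
is much cruder than the account the talk will present): each valence
chooses a subject and an object from the verb's arguments and marks
any remaining argument as an oblique.
    from itertools import permutations

    def valences(args, oblique_marking):
        out = []
        for subj, obj in permutations(args, 2):
            obliques = {a: oblique_marking[a] for a in args
                        if a not in (subj, obj) and a in oblique_marking}
            out.append({"subject": subj, "object": obj,
                        "obliques": obliques})
        return out

    # `load': "load the hay onto the truck" / "load the truck with hay"
    for v in valences(["agent", "goods", "container"],
                      {"goods": "with", "container": "onto"}):
        if v["subject"] == "agent":
            print(v)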
--------------
FIRST AFT PROJECT MEETING
Julius Moravcsik
Tuesday, January 14, 11 a.m., Ventura Conference Room
The regular meetings of the AFT (Aitiational Frame Theory) project
on lexical representation are on Tuesdays at 11 in the Ventura
Conference Room starting on January 14. For further information see
the December 5 CSLI newsletter (old newsletters are stored on CSLI in
<csli.newsletter>newsletters.txt). Please contact Julius Moravcsik
(julius@csli or (415)497-2130), if you are interested in joining the
group.
SYSTEM DEVELOPMENT LANGUAGES GROUP
Last quarter the System Development Languages Group organized a
weekly meeting at CSLI on ``environments''. This quarter it will be
replaced by a meeting on ``System description languages meet the real
world'' (hopefully a snappier name will emerge). We will look at
research in which formal description languages have been applied in
the system development process for large-scale systems (both computer
and organizational).
The emphasis will be on what has been learned about the relation
between idealized formal structures (of the kind used in
specification) and the exigencies of building, understanding and
modifying real systems. A system description language that is
actually applied is clearly a ``situated language'', in that the
correspondence between language and world is generated and enforced by
the real flow of events. Much of the failure of program specification
(both in attempted applications and in convincing the world to try)
has come from taking an overly idealized view of this correspondence,
rather than dealing in a principled way with the very real contextual
issues and lack of pre-omniscience of the specifier.
There are also complex interactions with natural language. A
person reading a specification in any language (no matter how formal)
makes use of natural language understanding as a background. As a
simple illustration, many of the identifiers are words in a natural
language (imagine reading a program or specification in which all
identifiers have been systematically replaced with meaningless
character sequences). An idealized view ignores this, concentrating
on the meaning as developed through the formal structure of
definitions. A realistic view must recognize and deal with questions
about how terms come to be used and understood within a community (the
system developers, users, etc.), and how this relates to theories of
natural language semantics.
One major focus will be the work done in Scandinavia (originated by
Nygaard) as reflected in a series of languages (Simula, Delta,
Epsilon, Florence, Beta, ...) and a series of system development
projects (DUE, UTOPIA, MARS, SYDPOL, ...). We are fortunate to have
several visitors from the universities of Oslo (Norway) and Aarhus
(Denmark) who have participated in this work. Other topics may
include work by Holt and by DeCindio et al. (using Petri-net-based
formalisms) and more popular system development methodologies (e.g.,
Jackson's) that make some use of precise descriptive languages. Once
again, we are eager to have people from the local/regional research
community attend and present relevant work.
The meetings will not start immediately, since some of the relevant
people have not yet arrived. There will be another announcement when
they are scheduled. If you have comments or suggestions for topics,
please send them to WINOGRAD@SU-CSLI.ARPA.
RATIONAL AGENCY GROUP
Summary of Fall 1985 Work
The fall-quarter meetings of the Rational Agency Group (alias
RatAg) have focused on the question: what must the architecture of a
rational agent with serious resource limitations look like? Our
attempts to get at answers to this question have been of two kinds.
One approach has been to consider problems in providing a coherent
account of human rationality. Specifically, we have discussed a
number of philosophically motivated puzzles, such as the case of the
Double Pinball Machine, and the problem of the Strategic Bomber,
presented in a series of papers by Michael Bratman. The second
approach we have taken has been to do so-called robot psychology.
Here, we have examined existing AI planning systems, such as the PRS
system of Mike Georgeff and Amy Lansky, in an attempt to determine
whether, and if so how, these systems embody principles of rationality.
Both approaches have led to the consideration of similar issues:
1) What primitive components must there be in an account of
rationality? From a philosophical perspective, this is
equivalent to asking what the set of primitive mental states
must be to describe human rationality; from an AI perspective,
this is equivalent to asking what the set of primitive mental
operators must be to build an artificial agent who behaves
rationally. We have agreed that the philosopher's traditional
2-parameter model, containing just ``beliefs'' and ``desires'',
is insufficient; we have further agreed that adding just a third
parameter, say ``intentions'', is still not enough. We are
still considering whether a 4-parameter model, which includes a
parameter we have sometimes called ``operant desires'', is
sufficient. These so-called operant desires are medial between
intentions and desires in that, like the former (but not the
latter), they control behavior in a rational agent, but, like the
latter (and not the former), they need not be mutually consistent
to satisfy the demands of rationality. The term ``goal'', we
discovered in passing, has been used at times to mean
intentions, at times desires, at times operant desires, and at
times other things; we have consequently banished it from our
collective lexicon.
2) What are ``plans'', and how do they fit into a theory of
rationality? Can they be reduced to some configuration of
other, primitive mental states, or must they also be introduced
as a primitive?
3) What are the combinatorial properties of these primitive
components within a theory of rationality, i.e., how are they
interrelated and how do they affect or control action? We have
considered, e.g., whether a rational agent can intend something
without believing it will happen, or not intend something she
believes will inevitably happen. One set of answers to these
questions that we have considered has come from the theory of
plans and action being developed by Michael Bratman. Another
set has come from work that Phil Cohen has been doing with
Hector Levesque, which involves explaining speech acts as a
consequence of rationality. These two theories diverge on many
points: Cohen and Levesque, for instance, are committed to the
view that if a rational agent believes something to be inevitable,
he also intends it; Bratman takes the opposite view. In recent
meetings, interesting questions have arisen about whether there
can be beliefs about the future that are `not' beliefs that
something will inevitably happen, and, if so, whether
concomitant intentions are guaranteed in a rational agent.
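For concreteness, the 4-parameter model and the two properties
attributed above to operant desires can be given a toy rendering in
Python (the encoding of propositions and of consistency is
deliberately simplistic and purely illustrative):
    def consistent(props):
        # toy check: no proposition together with its negation
        return not any(("not " + p) in props for p in props)

    class Agent:
        def __init__(self, beliefs, desires, intentions,
                     operant_desires):
            self.beliefs = set(beliefs)
            self.desires = set(desires)
            self.intentions = set(intentions)
            self.operant = set(operant_desires)

        def rational(self):
            # intentions (but not desires or operant desires) are
            # required here to be mutually consistent
            return consistent(self.intentions)

        def controlling_states(self):
            # the states that control behavior: intentions and
            # operant desires alike
            return self.intentions | self.operant

    a = Agent(beliefs={"it will rain"},
              desires={"stay dry", "not stay dry"},   # may conflict
              intentions={"carry umbrella"},
              operant_desires={"finish paper", "not finish paper"})
    print(a.rational(), a.controlling_states())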
The RatAg group intends to begin the new quarter by considering how
Cohen and Levesque's theory can handle the philosophical problems
discussed in Bratman's work. We will also be discussing the work of
Hector-Neri Castaneda in part to explore the utility of Castaneda's
distinction between propositions and practitions for our work on
intention, belief and practical rationality. Professor Castaneda will
be giving a CSLI colloquium in the spring.
RatAg participants this quarter have been Michael Bratman (project
leader), Phil Cohen, Todd Davies, Mike Georgeff, David Israel, Kurt
Konolige, Amy Lansky, and Martha Pollack. --Martha Pollack
←←←←←←←←←←←←
COURSE UNIFICATION ANNOUNCEMENT:
Linguistics 221: Syntactic Theory II (Winter)
and
Linguistics 230: Semantics and Pragmatics (Spring)
These two courses will be taught this year as an integrated
two-quarter introduction to unification-based approaches to the
analysis of fundamental issues in natural language syntax and
semantics. The course will be concerned with developing precise
syntactic and semantic treatments of numerous theoretically important
issues, such as governed and unbounded dependency constructions,
``controlled'' complements, anaphora, quantifiers, and a variety of
agreement phenomena. The theoretical orientation will be that of
Head-Driven Phrase Structure Grammar, currently being developed by
researchers at CSLI and elsewhere, and closely related work in PATR-II
being conducted primarily at SRI International.
The course is intended primarily for first-year graduate students
in Linguistics. However, because of the emphasis on situation-based
semantics and the florescence of ongoing computational work based on
HPSG/PATR-II-style linguistic analyses, the course may be of interest
to philosophers and computational linguists as well.
Second-year graduate students in Linguistics who, because of
changes in the department's curriculum, were unable to take an
introduction to HPSG last year, may enroll for just L221 by
arrangement.
Instructors: Carl Pollard, Mats Rooth, Ivan Sag (sag@csli)
Time: MWF: 8:45-9:55 AM
Place: 60-62L
Prerequisites: 1. Linguistics 220 or permission of the instructors
2. Knowledge of elementary set theory and predicate
logic (review sections will be offered during the
first three weeks of the course.)
←←←←←←←←←←←←
NEW CSLI REPORTS
Report No. CSLI-85-41, ``Possible-world Semantics for Autoepistemic
Logic'' by Robert C. Moore and Report No. CSLI-85-42, ``Deduction
with Many-Sorted Rewrite'' by Jose Meseguer and Joseph A. Goguen, have
just been published. These reports may be obtained by writing to
Trudy Vizmanos, CSLI, Ventura Hall, Stanford, CA 94305 or
Trudy@SU-CSLI.
-------
∂15-Jan-86 1739 EMMA@SU-CSLI.ARPA Newsletter January 16, No. 8
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 15 Jan 86 17:38:25 PST
Date: Wed 15 Jan 86 16:52:56-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Newsletter January 16, No. 8
To: friends@SU-CSLI.ARPA
Tel: 497-3479
C S L I N E W S L E T T E R
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
January 16, 1986 Stanford Vol. 3, No. 8
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR THIS THURSDAY, January 16, 1986
12 noon TINLunch
Ventura Hall Generalized Quantifiers and Plurals
Conference Room by Godehard Link
Discussion led by Mats Rooth (Rooth@csli)
2:15 p.m. CSLI Seminar
No Seminar
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
No Colloquium
--------------
CSLI ACTIVITIES FOR NEXT THURSDAY, January 23, 1986
12 noon TINLunch
Ventura Hall The Mind's New Science
Conference Room by Howard Gardner
Discussion led by Thomas Wasow (Wasow@csli)
(Abstract on page 2)
2:15 p.m. CSLI Seminar
Computer Problem Solving Languages, Programming
Languages and Mathematics
Curtis Abbott (Abbott@xerox)
(Abstract on page 2)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
No Colloquium
--------------
ANNOUNCEMENT
Please note that the seminar and colloquium are no longer in
Redwood Hall room G-19. We are trying to get a new place.
NEXT WEEK'S TINLUNCH
The Mind's New Science
by Howard Gardner
The first chapter of Howard Gardner's ``The Mind's New Science: A
History of the Cognitive Revolution'' lays out five assumptions that
he claims characterize work in Cognitive Science. Although Gardner
cites CSLI as part of the ``revolution'' he is chronicling, some of
his five assumptions would be quite controversial around here. The
questions I would like to discuss are: Is he wrong in claiming that
his assumptions are widely accepted by cognitive scientists, or is he
wrong to include CSLI in his book? If the former, what ``are'' the
shared assumptions of cognitive scientists? If the latter, what is
the relationship between cognitive science and the work we do at CSLI?
--Thomas Wasow
--------------
NEXT WEEK'S SEMINAR
Computer Problem Solving Languages,
Programming Languages and Mathematics
by the
Semantically Rational Computer Languages Group
Programming languages are constrained by the requirement that their
expressions must be capable of giving rise to behavior in an
effective, algorithmically specified way. Mathematical formalisms,
and in particular languages of logic, are not so constrained, but
their applicability is much broader than the class of problems anyone
would think of ``solving'' with computers. This suggests, and I
believe, that formal languages can be designed that are connected with
the concerns associated with solving problems with computers yet not
constrained by effectiveness in the way programming languages are. I
believe that such languages, which I call ``computer problem solving
languages,'' provide a more appropriate evolutionary path for
programming languages than the widely pursued strategy of designing
``very high level'' programming languages, and that they can be
integrated with legitimate programming concerns by use of a
transformation-oriented methodology. In this presentation, I will
give several examples of how this point of view impacts language
design, examples which arise in Membrane, a computer problem solving
language I am in the process of designing. --Curtis Abbott
TEA TIME DISCUSSION: CSLI WITHER?
led by Terry Winograd
Wednesday, January 22, 3:30, Ventura Lounge
To be simplistic, it is my view that the survival of CSLI as an
institution, beyond its having SDF money to disburse, depends on the
emergence of one person (or a small group) who wants to use it as
his/her/their vehicle for getting something done in the world. That
is, it has to be shaped by a particular vision that is much less
eclectic than the current institute. It cannot be a broad
interdisciplinary interaction. Decisions about what gets funded, who
gets hired, etc. have to be guided by a clear and somewhat
single-minded idea about what is important and what is worth doing.
All of the more immediate problems (people not talking to each other
enough, not enough commitment to describe their results to others,
etc.) are symptomatic of the lack of a shared direction.
The obvious problem, of course, is that you can't simply wish
leadership into existence. Someone with sufficient power (both
intellectual and political) has to want to do it and be willing to put
in large amounts of time and effort toward building and developing
CSLI.
CSLI as now constituted is a rather unwieldy beast, and may be
quite difficult to shape into something more coherent. It will not be
easy, since it involves cutting out a lot of what is there now (or at
least providing benign neglect until it withers away), fighting the
post-SDF resource problem, etc.
(CSLI tea time discussions are informal talks about matters of
interest to the CSLI community.)
--------------
POSTDOCTORAL FELLOWSHIPS
The Center for the Study of Language and Information (CSLI) at
Stanford University is currently accepting applications for a small
number of one-year postdoctoral fellowships commencing September 1,
1986. The awards are intended for people who have received their
Ph.D. degrees since June 1983.
Postdoctoral fellows will participate in an integrated program of
basic research on situated language---language as used by agents
situated in the world to exchange, store, and process information,
including both natural and computer languages.
For more information about CSLI's research programs and details of
postdoctoral fellowship appointments, write to:
Dr. Elizabeth Macken, Assistant Director
Center for the Study of Language and Information
Ventura Hall
Stanford University
Stanford, California 94305
APPLICATION DEADLINE: FEBRUARY 15, 1986
-------
∂22-Jan-86 1811 @SU-CSLI.ARPA:admin%cogsci@BERKELEY.EDU UCB Cognitive Science Seminar--Jan. 28, (Andrea diSessa,UCB)
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 22 Jan 86 18:11:14 PST
Received: from ucbvax.berkeley.edu by SU-CSLI.ARPA with TCP; Wed 22 Jan 86 18:03:01-PST
Received: by ucbvax.berkeley.edu (5.44/1.7)
id AA15848; Wed, 22 Jan 86 16:48:16 PST
Received: by cogsci.berkeley.edu (5.44/5.16)
id AA28861; Wed, 22 Jan 86 16:47:34 PST
Date: Wed, 22 Jan 86 16:47:34 PST
From: admin%cogsci@BERKELEY.EDU (Cognitive Science Program)
Message-Id: <8601230047.AA28861@cogsci.berkeley.edu>
To: cogsci-friends%cogsci@BERKELEY.EDU
Subject: UCB Cognitive Science Seminar--Jan. 28, (Andrea diSessa,UCB)
BERKELEY COGNITIVE SCIENCE PROGRAM
Spring 1986
Cognitive Science Seminar - IDS 237B
Tuesday, January 28, 11:00 - 12:30
[NB. New Location] 2515 Tolman Hall
Discussion: 12:30 - 1:30 [location TBA]
``Knowledge in Pieces''
Andrea A. diSessa
Math Science and Technology, School of Education
Abstract
Naive Physics concerns expectations, descriptions, and explanations
about the way the physical world works that people seem spontaneously
to develop through interaction with it. A recent upswing in interest
in this area, particularly concerning the relation of naive physics
to the learning of school physics, has yielded significant and
interesting data, but little in the way of a theoretical foundation.
I would like to provide a sketch of a developing theoretical frame
together with many examples that illustrate it.
In broad strokes, one sees a rich but rather shallow (in a sense I
will define), loosely coupled knowledge system with elements that
originate often as minimal abstractions of common phenomena. Rather
than a "change of theory" or even a shift in content of the knowledge
system, it seems that developing understanding of classroom physics
may better be described in terms of a change in structure that
includes selection and integration of naive knowledge elements into a
system that is much less data-driven, less context dependent, and
more capable of "reliable" (in a technical sense) descriptions and
explanations. In addition, I would like to discuss some hypothetical
changes at a systematic level that do look more like changes of
theory or belief. Finally, I would like to consider the potential
application of this work to other domains of knowledge, and the
relation to other perspectives on the problem of knowledge.
----------------------------------------------------------------
ELSEWHERE ON CAMPUS
EMST Faculty Candidate Presentation: Beth Adelson of the Artificial
Intelligence Lab at Yale University will speak on "Issues in
programming: a process model and some representations" on Monday,
January 27, from 1:30 to 3:00 in 2515 Tolman.
----------------------------------------------------------------
∂22-Jan-86 1823 EMMA@SU-CSLI.ARPA Newsletter January 23, No. 9
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 22 Jan 86 18:23:40 PST
Date: Wed 22 Jan 86 17:32:26-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Newsletter January 23, No. 9
To: friends@SU-CSLI.ARPA
Tel: 497-3479
Tel: 723-3561
C S L I N E W S L E T T E R
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
January 23, 1986 Stanford Vol. 3, No. 9
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR THIS THURSDAY, January 23, 1986
12 noon TINLunch
Ventura Hall The Mind's New Science
Conference Room by Howard Gardner
Discussion led by Thomas Wasow (Wasow@csli)
2:15 p.m. CSLI Seminar
Ventura Hall Computer Problem Solving Languages, Programming
Trailer Classroom Languages and Mathematics
Curtis Abbott (Abbott@xerox)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
No colloquium
--------------
CSLI ACTIVITIES FOR NEXT THURSDAY, January 30, 1986
12 noon TINLunch
Ventura Hall Pragmatics: An Overview
Conference Room Dan Sperber and Deirdre Wilson
Discussion led by Stephen Neale (Neale@csli)
(Abstract on page 2)
2:15 p.m. CSLI Seminar
Ventura Hall Term Rewriting Systems and Application to Automated
Trailer Classroom Theorem Proving and Logic Programming
Helene Kirchner (Kirchner@sri-ai)
(Abstract on page 2)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
No colloquium
--------------
ANNOUNCEMENT
Until further notice, seminars and colloquia will be held in the
Trailer Classroom.
NEXT WEEK'S TINLUNCH
Pragmatics: An Overview
by Dan Sperber and Deirdre Wilson
In this paper Sperber and Wilson outline a theory of utterance
interpretation based on what they call the ``Principle of Relevance''
(P.O.R.). Although in some ways an outgrowth of Grice's Co-operative
Principle and attendant maxims, the P.O.R. is freed from the social
and moral underpinnings of Grice's theory and is billed as ``a brute fact
about human psychology''. Sperber and Wilson thus aim to provide a
full-blown theory of pragmatic competence with which to actually model
the derivation of pragmatic inferences rather than provide ex post
facto explanations. The paper provides a useful overview of their
forthcoming book ``Relevance: A Study in Verbal Understanding''
(Oxford: Blackwell, Feb. 1986). --Stephen Neale
--------------
NEXT WEEK'S SEMINAR
Term Rewriting Systems and Application to
Automated Theorem Proving and Logic Programming
Helene Kirchner
Term rewriting systems are sets of rules (i.e. directed equations)
used to compute equivalent terms in an equational theory. Term
rewriting systems are required to be terminating and confluent in
order to ensure that any computation terminates and does not depend on
the choice of applied rules. Completion of term rewriting systems
consists of building, from a set of non-directed equations, a
confluent and terminating set of rules that has the same deductive
power. After a brief description of these two notions, their
application in two different domains is illustrated:
- automated theorem proving in equational and first-order
logic,
- construction of interpreters for logic programming languages
mixing relational and functional features.
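The basic mechanism can be conveyed in a few lines of Python (a toy
string-rewriting system, not the speaker's; the rule set below
happens to be terminating and confluent, so every term reaches a
unique normal form no matter in which order the rules are applied,
which is exactly the property completion is designed to secure):
    RULES = [("aa", "a"), ("bb", "b"), ("ba", "ab")]  # directed equations

    def rewrite_step(term):
        # apply the first rule whose left-hand side occurs in the term
        for lhs, rhs in RULES:
            i = term.find(lhs)
            if i >= 0:
                return term[:i] + rhs + term[i + len(lhs):]
        return None               # no rule applies: a normal form

    def normal_form(term):
        while True:
            nxt = rewrite_step(term)
            if nxt is None:
                return term
            term = nxt

    print(normal_form("babab"))   # -> "ab"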
--------------
LOGIC SEMINARS
Branching Generalized Quantifiers
Dag Westerstahl
Monday, January 27 and February 3, 4:15-5:30
Faculty lounge (3rd floor Mathematics)
The idea that partially ordered prefixes (branchings) of the
universal and the existential quantifiers occur in natural languages
originates with Hintikka, who in particular claimed that the Henkin
quantifier occurs essentially in English. In these talks, the notion
of branching is extended to (logics with) generalized quantifiers. It
was Barwise who in ``On branching quantifiers in English'' (J. Phil.
Logic, 1979) observed that certain non-first-order quantifiers provide
an even more convincing example of proper branching in English---that
paper is the point of departure of my discussion. The first talk is
concerned with finding a uniform truth definition for sentences with
branching generalized quantifiers, and related issues such as
monotonicity constraints on quantifiers which allow branching. For
example, a generalized Henkin prefix, with four arbitrary quantifiers
(of the appropriate types), is defined. The second talk gives some
simple facts about the logical expressive power of branching.
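As background, the Henkin prefix mentioned above, together with its
standard second-order (Skolem-function) truth condition, can be
written as follows:
    \[
      \begin{pmatrix} \forall x \, \exists y \\
                      \forall u \, \exists v \end{pmatrix}
      \phi(x,y,u,v)
      \;\Longleftrightarrow\;
      \exists f \, \exists g \, \forall x \, \forall u \;
      \phi\bigl(x, f(x), u, g(u)\bigr)
    \]
Here y depends only on x and v only on u, a pattern of independence
that no linear first-order prefix can in general express; the
generalized Henkin prefix of the first talk replaces the four
universal/existential quantifiers by arbitrary generalized
quantifiers of the appropriate types.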
TEA SUMMARY
The Ventura Lounge was packed at 3:30 on Wednesday, January 22, as
Terry Winograd led a discussion on the future of CSLI as an
institution. Terry argued that, to remain viable, CSLI would need
some strong binding force to counteract the pulls created by its
geographical, institutional, and disciplinary diversity. This
function could be served, he suggested, by money or by a common
research project with a dynamic leader. John Perry and Tom Wasow
argued that the inter-institutional, multidisciplinary projects
currently underway are sufficiently robust to resist the pulls Terry
talked about. After a lively discussion, it was generally agreed that
CSLI could continue to thrive even if its primary role were to
facilitate interactions, rather than to fund them or direct them.
CSLI will need to provide some level of resources to the research
projects (in the form of meeting space, computational resources, staff
support, etc.). The value of various types of resources, and what
would be required of the CSLI community to ensure their continued
availability, were then discussed.
-----------
POSTDOCTORAL FELLOWSHIPS
The Center for the Study of Language and Information (CSLI) at
Stanford University is currently accepting applications for a small
number of one-year postdoctoral fellowships commencing September 1,
1986. The awards are intended for people who have received their
Ph.D. degrees since June 1983.
Postdoctoral fellows will participate in an integrated program of
basic research on situated language---language as used by agents
situated in the world to exchange, store, and process information,
including both natural and computer languages.
For more information about CSLI's research programs and details of
postdoctoral fellowship appointments, write to:
Dr. Elizabeth Macken, Assistant Director
Center for the Study of Language and Information
Ventura Hall
Stanford University
Stanford, California 94305
APPLICATION DEADLINE: FEBRUARY 15, 1986
-------
∂29-Jan-86 1647 @SU-CSLI.ARPA:admin%cogsci@BERKELEY.EDU UCB Cognitive Science Seminar--Feb. 4, 1986
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 29 Jan 86 16:47:34 PST
Received: from cogsci.berkeley.edu ([128.32.130.5].#Internet) by SU-CSLI.ARPA with TCP; Wed 29 Jan 86 16:37:20-PST
Received: by cogsci.berkeley.edu (5.44/1.9)
id AA18022; Wed, 29 Jan 86 16:32:59 PST
Date: Wed, 29 Jan 86 16:32:59 PST
From: admin%cogsci@berkeley.edu (Cognitive Science Program)
Message-Id: <8601300032.AA18022@cogsci.berkeley.edu>
To: allmsgs@cogsci.berkeley.edu, cogsci-friends@cogsci.berkeley.edu
Subject: UCB Cognitive Science Seminar--Feb. 4, 1986
Cc: admin@cogsci.berkeley.edu
BERKELEY COGNITIVE SCIENCE PROGRAM
Cognitive Science Seminar - IDS 237B
Tuesday, February 4, 11:00 - 12:30
2515 Tolman Hall
Discussion: 12:30 - 1:30
3105 Tolman (Beach Room)
``Developmental Paths between Form and Meaning:
Crosslinguistic and Diachronic Perspectives''
Dan I. Slobin
Department of Psychology, UCB
It will be argued that children come to the task of language
acquisition equipped with four interacting mental spaces, each with
its own kind of multidimensional hierarchical structure: (1) semantic
space, containing notions that are universally privileged for
grammatical expression; (2) pragmatic space, regulating the ways in
which utterances are put to social, interpersonal purposes; (3)
morphosyntactic space, defining grammatical forms in conjunction with
processing and organizational parameters; and (4) morphophonological
space, defining the acoustic-articulatory material of speech (or the
visual-motor material of sign). Crosslinguistic developmental and
diachronic data will be called upon to illustrate ways in which
language acquisition requires constant interaction between these four
mental spaces, each with its own internal hierarchy of accessibility
and with relations of mutual relevance between individual elements
across spaces. The discussion will focus on the problem of allomorphy
and the means used by the child to find distinct functions for
varying forms of words with common meanings. It will be shown that
children use both semantic and non-semantic factors for paradigm
construction, and that similar patterns can be found in historical
language change. Implications for language and cognition will be
suggested.
---------------------------------------------------------------------
UPCOMING TALKS
Feb 11: Jonas Langer, Psychology, UCB
Feb 18: Michael Silverstein, Anthropology, University of Chicago
Feb 25: Frederick Reif, Physics and EMST, School of Education, UCB
Mar 11: Carlotta Smith, Center for Advanced Study in the
Behavioral Sciences
Mar 18: John Seely Brown, Xerox PARC
Apr 1: Elisabeth Bates, Psychology, UCSD
----------------------------------------------------------------
ELSEWHERE ON CAMPUS
EMST Faculty Candidate Presentation: André Boder of the University
of Geneva and M.I.T. will speak on "Familiar Schemes, Problem-Solving
Strategies, and the Acquisition of New Knowledge" on Monday,
February 3, from 1:30 to 3:00 in 2515 Tolman.
∂29-Jan-86 1803 EMMA@SU-CSLI.ARPA Calendar Vol. 1, No. 1
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 29 Jan 86 18:02:45 PST
Date: Wed 29 Jan 86 17:00:01-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Calendar Vol. 1, No. 1
To: friends@SU-CSLI.ARPA
Tel: 497-3479
Tel: 723-3561
C S L I C A L E N D A R O F P U B L I C E V E N T S
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
January 30, 1986 Stanford Vol. 1, No. 1
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR THIS THURSDAY, January 30, 1986
12 noon TINLunch
Ventura Hall Pragmatics: An Overview
Conference Room Dan Sperber and Deirdre Wilson
Discussion led by Stephen Neale (Neale@csli)
2:15 p.m. CSLI Seminar
Ventura Hall Term Rewriting Systems and Application to Automated
Trailer Classroom Theorem Proving and Logic Programming
Helene Kirchner (Kirchner@sri-ai)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
No colloquium
--------------
CSLI ACTIVITIES FOR NEXT THURSDAY, February 6, 1986
12 noon TINLunch
Ventura Hall The Wizards of Ling
Conference Room by Thomas Wasow
Discussion led by Mark Gawron (Gawron@csli)
(Abstract on page 2)
2:15 p.m. CSLI Seminar
Ventura Hall To be announced
Trailer Classroom
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
No colloquium
--------------
ANNOUNCEMENT
The CSLI Newsletter has been replaced by two different publications: a
weekly calendar of public events and a monthly summary of research
progress. CSLI FOLKS will automatically receive both publications on
line. Other Newsletter subscribers will receive separate messages
about their subscriptions.
NEXT WEEK'S TINLUNCH
The Wizards of Ling
by Thomas Wasow
Discussion led by Mark Gawron
In this brief note, Wasow argues forcefully that linguistics is not
a science, and indeed, like ``ballet dancing, chess,...and knitting,''
may never be. The core of his argument is that linguistics does not
exhibit any of three characteristics that a purported science ought to
exhibit: incremental progress, objective verifiability, and practical
applicability. For next week's TINLunch, we will look closely at this
argument, as well as discuss some of the broader questions in the
foundations of linguistics that it raises.
--------------
LOGIC SEMINAR
Branching Generalized Quantifiers, cont.
Dag Westerstahl
Monday, February 3, 4:15-5:30
Math. Faculty Lounge, Room 383-N
In this second talk on BGQ, I hope to say something about each of the
following topics: (i) the expressive power of logics with branching
generalized quantifiers; (ii) first-order definability of branching;
(iii) logics with branching quantifier variables; (iv) the relation
between a branching sentence and its linear versions.
-------
∂30-Jan-86 0924 EMMA@SU-CSLI.ARPA CSLI mailing lists
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 30 Jan 86 09:16:35 PST
Date: Thu 30 Jan 86 09:12:46-PST
From: csli-request
Subject: CSLI mailing lists
Sender: EMMA@SU-CSLI.ARPA
To: friends@SU-CSLI.ARPA
Reply-To: csli-request@su-csli.arpa
Tel: 497-3479
Tel: 723-3561
Please see the announcement in the CSLI calendar (which you should
have received last night).
If you received this message, you will receive both the CSLI weekly
calendar of public events and the monthly summary of research
progress. If you wish to receive only the monthly summary of research
or neither publication, please send a message to csli-request@su-csli.arpa.
-------
∂05-Feb-86 1605 @SU-CSLI.ARPA:admin%cogsci@BERKELEY.EDU UCB Cognitive Science Seminar--Feb. 11 (Jonas Langer)
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 5 Feb 86 15:56:14 PST
Received: from cogsci.berkeley.edu ([128.32.130.5].#Internet) by SU-CSLI.ARPA with TCP; Wed 5 Feb 86 15:46:41-PST
Received: by cogsci.berkeley.edu (5.44/1.9)
id AA24573; Wed, 5 Feb 86 15:40:45 PST
Date: Wed, 5 Feb 86 15:40:45 PST
From: admin%cogsci@berkeley.edu (Cognitive Science Program)
Message-Id: <8602052340.AA24573@cogsci.berkeley.edu>
To: allmsgs@cogsci.berkeley.edu, cogsci-friends@cogsci.berkeley.edu,
seminars@ucbvax.berkeley.edu
Subject: UCB Cognitive Science Seminar--Feb. 11 (Jonas Langer)
Cc: admin@cogsci.berkeley.edu
BERKELEY COGNITIVE SCIENCE PROGRAM
Cognitive Science Seminar - IDS 237B
Tuesday, February 11, 11:00 - 12:30
2515 Tolman Hall
Discussion: 12:30 - 1:30
3105 Tolman (Beach Room)
``The Origins of Logic''
Jonas Langer
Department of Psychology, UCB
I will try to show that logical cognition (1) originates during the
first year of infancy and (2) begins to be representational during
the second year of infancy. This includes proposing some of its
initial structural features. These claims imply that (a) a symbolic
language is not necessary for the origins of logical cognition and
(b) that ordinary language is not necessary for its initial
representational development. Supporting data will be drawn from
J. Langer, The Origins of Logic: Six to Twelve Months, Academic
Press, 1980, and The Origins of Logic: One to Two Years, Academic
Press, 1986.
---------------------------------------------------------------------
UPCOMING TALKS
Feb 18: Michael Silverstein, Anthropology, University of Chicago
Feb 25: Frederick Reif, Physics and EMST, School of Education, UCB
Mar 4: Curtis Hardyk, Psychology, UCB
Mar 11: Carlotta Smith, Linguistics, University of Texas (currently at the Center for Advanced Study in the Behavioral Sciences)
Apr 1: Elisabeth Bates, Psychology, UCSD
Apr 8: Björn Lindblom, Linguistics, University of Stockholm;
Peter MacNeilage, Linguistics, University of Texas;
Michael Studdert-Kennedy, Psychology, Queens College
(all currently at the Center for Advanced Study in the
Behavioral Sciences)
----------------------------------------------------------------
ELSEWHERE ON CAMPUS
The Bay Area Sociolinguistics Association will meet on Saturday,
Feb. 8, at the home of Ruth Cathcart-Strong, 1105 The Alameda,
Berkeley (415) 525-8616. Informal talks will be given by:
Ruth Cathcart-Strong (MIIS) & Allison Heisch (SJSU), "Contrastive
Discourse: Crosscultural Approaches to Writing";
Denise Murray (SJSU), "The Web of Communication";
Wally Chafe (UCB), "Follow-up to the Pear Stories"
John Haviland will be giving a lecture and video illustration on
``Complex Referential Gestures in Guugu Yimidhirr Story-Telling''
at the Anthropology Department Seminar on Monday, February 10,
3:00-5:00pm in 160 Kroeber. John Haviland has been working with
natural conversations among Australian aborigines and Tzotzil-speaking
Mexican Indians for a number of years. In this paper he will seek to show
how a variety of referential systems in language and gesture interact
to produce narrative that draws on linguistic knowledge, biographical
knowledge and indexical features of speech events.
Ruth A. Berman of the Linguistics Department at Tel-Aviv University
will be giving a talk entitled "Between Syntax and the Lexicon: Noun
Compounding in Hebrew" at the Linguistics Group Meeting on Tuesday,
Feb. 11 at 8:00 p.m. in room 117 Dwinelle Hall, Campus.
∂06-Feb-86 0829 EMMA@SU-CSLI.ARPA Calendar February 6, No. 2
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 6 Feb 86 08:29:49 PST
Date: Thu 6 Feb 86 08:25:19-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Calendar February 6, No. 2
To: friends@SU-CSLI.ARPA
Tel: 497-3479
Tel: 723-3561
C S L I C A L E N D A R O F P U B L I C E V E N T S
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
February 6, 1986 Stanford Vol. 1, No. 2
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR THIS THURSDAY, February 6, 1986
12 noon TINLunch
Ventura Hall The Wizards of Ling
Conference Room by Thomas Wasow
Discussion led by Mark Gawron (Gawron@csli)
2:15 p.m. CSLI Seminar
Ventura Hall No seminar
Trailer Classroom
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
No colloquium
--------------
CSLI ACTIVITIES FOR NEXT THURSDAY, February 6, 1986
12 noon TINLunch
Ventura Hall No TINLunch
Conference Room
2:15 p.m. CSLI Seminar
Ventura Hall To be announced
Trailer Classroom
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
No colloquium
--------------
!
Page 2 CSLI Calendar February 6, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
SYSTEM DESCRIPTION AND DEVELOPMENT MEETING
Next week (Monday, February 10) we will begin the System
Description and Development Meetings that were described in the
Newsletter in January (a copy of the initial description is available
in <winograd>SDDM.TXT on CSLI and SCORE). The meetings will be on
Mondays at noon in the Ventura Trailer Classroom. The first speaker
will be Jens Kaasboll, a visitor to CSLI from the University of Oslo.
He has been working on the FLORENCE project, in which precise system
description languages (not programming languages) are being designed
to serve in the development of informatics systems for use by nurses
in hospitals. FLORENCE is part of a larger project on System
Development and Profession Oriented Languages (SYDPOL) which includes
projects in Norway, Denmark, and Sweden. There will be no meeting on
February 17 (Presidents' Day holiday). They will resume on the 24th.
Future talks will include Kristen Nygaard (originator of the SYDPOL
project) on March 3. Suggestions for other speakers are welcome (send
to WINOGRAD@SU-CSLI.ARPA).
Abstract for February 10:
Intentional Development of Professional Language through System
Development: A Case Study and Some Theoretical Considerations
Jens Kaasboll, University of Oslo
Monday, February 10, 12:00
In order to develop informatics-oriented languages for nurses,
various techniques have been employed, including system description
with nurses and observation of nurses at work. Observation revealed
unformalizable parts of the work; these parts did not show up in the
system descriptions. The system description process, however,
triggered reflection among the nurses.
Nurses' use of language differs from common language in its concepts
and intentions. Knowing parts of their language helps avoid confusion
and guide the functionality of computer systems. Extending the
professional language of nurses with concepts for dealing with
information processing was partly unpredictable. Knowledge and
concepts taught were reflected in the nurses' use of more concrete
terms. During the system description, the nurses coined new symbols
suited to their work.
--------------
LOGIC SEMINAR
Logics with Transitive Closures and Fixpoint Operators
Haim Gaifman, Hebrew University, visiting SRI and Stanford
Monday, February 10, 4:15-5:30
Math. Faculty Lounge, Room 383-N
-------
∂06-Feb-86 0842 EMMA@SU-CSLI.ARPA Correction Calendar
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 6 Feb 86 08:41:28 PST
Date: Thu 6 Feb 86 08:32:02-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Correction Calendar
To: friends@SU-CSLI.ARPA
Tel: 497-3479
Tel: 723-3561
Next Thursday's activities (if we have any) will be on February
13 not on February 6 as stated in the Calendar.
Emma Pease
-------
∂12-Feb-86 1045 @SU-CSLI.ARPA:BrianSmith.pa@Xerox.COM CPSR Annual Meeting: March 1
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 12 Feb 86 10:45:50 PST
Received: from Xerox.COM by SU-CSLI.ARPA with TCP; Wed 12 Feb 86 10:37:46-PST
Received: from Cabernet.ms by ArpaGateway.ms ; 12 FEB 86 10:35:22 PST
Date: 12 Feb 86 10:24 PST
From: BrianSmith.pa@Xerox.COM
Subject: CPSR Annual Meeting: March 1
To: Friends@SU-CSLI.ARPA
cc: BrianSmith.pa@Xerox.COM
Reply-to: BrianSmith.pa@Xerox.COM
Message-ID: <860212-103522-1327@Xerox>
You are all invited to attend the Annual Meeting of Computer
Professionals for Social Responsibility (CPSR), to be held on Saturday,
March 1, 1986. The meeting will consist of a day-long program on
important social issues in computation, followed by an evening banquet
featuring Dr. Herbert Abrams.
Day Program:
10:00 -- Noon Issues Forum (details below)
Noon -- 2:00 Lunch
2:00 -- 4:00 The Direction and Future of CPSR
4:30 -- 6:00 Ad Hoc Workshops on Issues of Interest, and
a short meeting of the CPSR Board of Directors.
Place: Redwood Hall at Stanford University (across the street from
Ventura Hall, the Stanford site of CSLI, at the corner of
Campus Drive and Panama St., near the medical school). The
Ad Hoc Workshops will be at Stanford CSLI in Ventura Hall.
Evening Banquet:
7:00 -- 10:00 Camino Ballroom, Rickey's Hyatt, 4219 El Camino,
Palo Alto.
Featured Speaker: Dr. Herbert Abrams, speaking on "The Problem of
Accidental or Inadvertent Nuclear War"
(Dr. Abrams is a founder of PSR and IPPNW, the 1985
recipient of the Nobel Peace Prize).
Registration fee for the full day program and banquet is $25: $5 for
the day program, which includes a sandwich lunch; $20 for the banquet
dinner. You may sign up for both parts, or either, as you like.
For more information: Call CPSR at (415) 322-3778.
Registration tickets: CPSR, at the above number, or
Stanford: Susan Stucky (723-3301, Ventura Hall)
Terry Winograd (723-2780, Margaret Jacks)
Xerox PARC: Brian Smith (494-4336, Room 1656)
Denise Pawson (494-4303, Room 1656A)
Forum on CPSR Issues (10:00 a.m. -- Noon)
-----------------------------------------
1. "The Constitutionality of Automatic Launch of Nuclear Weapons"
-- Clifford Johnson, Plaintiff, Johnson vs. Weinberger. Manager,
Stanford University Information Technology Services.
2. "The Computer Aspects of the Strategic Defense Initiative"
-- Dave Redell, Digital Equipment Corporation Systems Research
Center.
3. "Artificial Intelligence and the Law"
-- Susan Nycum, Attorney, Gaston, Snow & Ely Bartlett, Palo Alto.
4. "Computers and Civil Liberties"
-- Marc Rotenberg, Student at Stanford Law School, former President
of the Public Interest Computing Association (PICA).
5. "A Feminist Perspective on Computer Technology"
-- Deborah Estrin, Assistant Professor of Computer Science,
University of Southern California, and
-- Lucy Suchman, Xerox Palo Alto Research Center
∂12-Feb-86 1408 @SU-CSLI.ARPA:admin%cogsci@BERKELEY.EDU Berkeley Linguistics Society's 12th Annual Meeting
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 12 Feb 86 14:08:09 PST
Received: from cogsci.berkeley.edu ([128.32.130.5].#Internet) by SU-CSLI.ARPA with TCP; Wed 12 Feb 86 13:49:42-PST
Received: by cogsci.berkeley.edu (5.44/1.9)
id AA29390; Wed, 12 Feb 86 13:44:58 PST
Date: Wed, 12 Feb 86 13:44:58 PST
From: admin%cogsci@berkeley.edu (Cognitive Science Program)
Message-Id: <8602122144.AA29390@cogsci.berkeley.edu>
To: cogsci-friends@cogsci.berkeley.edu
Subject: Berkeley Linguistics Society's 12th Annual Meeting
The Twelfth Annual Meeting of the Berkeley Linguistics Society
February 15-17, 1986
Final Schedule
SATURDAY, February 15, 1986 (60 Evans)
MORNING SESSION: GENERAL
9:00 Martha Macri, UCB, "Polyadicity of Three Verbs Associated
with Blood-Letting Rituals in Western Glyphic Maya"
9:30 Dawn Bates, University of Washington, "An Analysis of
Lushootseed Diminutive Reduplication"
10:00 Arnold Zwicky, Ohio State University and Stanford University,
"The General Case : Basic Form versus Default Form"
10:30 BREAK
10:50 Deborah Tannen, Georgetown University, "Folk Formality"
11:30 Cheryl Ramsey Garcia, "Sex and the Question: Terminal Contours
of Responses by Women and Men"
12:00 Mark Gawron, Stanford University, "Clefts, Discourse Representations,
and Situation Semantics"
12:30 LUNCH BREAK
AFTERNOON SESSION: PARASESSION
2:00 John Hawkins, USC, "A Semantic Typology Derived from Variation
in Germanic"
2:30 Carol Genetti, University of Oregon, "The Development of
Subordinators from Postpositions in Bodic Languages"
3:00 Zygmunt Frajzyngier, University of Colorado, "From Preposition
to Copula"
3:30 BREAK
3:50 Cynthia Welsh, University of Chicago, "Is the Compositionality
Principle a Semantic Universal?"
4:20 George Lakoff, UCB, and Claudia Brugman, UCB, "Methods of
Semantic Argumentation: Polysemy as a Major Source of Evidence"
4:50 Eve Sweetser, UCB, "Polysemy vs. Abstraction: Mutually Exclusive
or Complementary?"
5:20 DINNER BREAK
EVENING SESSION: GENERAL
7:00 Charles Li, UCSB, "The Rise and Fall of Tonal Systems"
7:40 Amy Dahlstrom, UCB, "Weak Crossover and Obviation"
8:10 Marianne Mithun, SUNY-Albany, "When Zero Isn't There"
8:50 Janine Scancarelli, UCLA / UCSB, "Pragmatic Roles in Cherokee
Grammar"
9:20 PARTY (Stephens Hall Lounge)
SUNDAY, February 16, 1986 (2003 Life Sciences Building)
MORNING SESSION: GENERAL
9:00 Charles Fillmore, UCB, "Pragmatically Controlled Zero Anaphora"
9:30 Wayne Cowart, Ohio State University, "Evidence for a Strictly
Sentence-internal Antecedent Finding Mechanism"
10:00 Linda Schwartz, Indiana University, "Levels of Grammatical Relations
and Characterizing Reflexive Antecedents in Russian"
10:30 BREAK
LATE MORNING: PARASESSION
10:50 Michael Silverstein, University of Chicago, "Classifiers, Verb
Classifiers, and Verbal Categories"
11:30 Judy Kegl, Northeastern University, and Sara Schley, Northeastern
University, "When is a Classifier not a Classifier ? "
12:00 David Dowty, Ohio State University, "Thematic Roles and Semantics"
12:40 LUNCH BREAK
AFTERNOON SESSION: GENERAL
2:00 Nancy Dorian, Bryn Mawr College, "Abrupt Transmission Failure
in Obsolescing Languages: How Sudden the `Tip' to the Dominant
Language in Communities and Families?"
2:40 Kathie Carpenter, Stanford University, "Productivity and Pragmatics
of Thai Numeral Classifiers"
3:10 Linda Thornburg, CSU-Fresno, "The Development of the Indirect
Passive in English"
3:40 BREAK
4:00 Stephen Wilson, UCB, "Metrical Structure in Wakashan Phonology"
4:30 Rachelle Waksler, UCSC/Harvard University, "CV- versus X-Notation:
A Formal Comparison"
5:00 Michael Dobrovolsky, University of Calgary, "Stress and Vowel
Harmony Domains in Turkish"
5:30 DINNER BREAK
EVENING SESSION: PARASESSION
7:30 Ronald Schaefer, University of Kansas, "On Reference Objects in
Emai Path Expressions"
8:00 Leonard Talmy, UCB, "Linguistic Determiners of Perspective and
Attention"
8:40 Claudia Brugman, UCB, and Monica Macaulay, UCB, "Interacting
Semantic Systems : Mixtec Expressions of Location"
MONDAY, February 17, 1986 (2003 Life Sciences Building)
MORNING SESSION: GENERAL
9:00 Suzanne Fleischman, UCB, "Overlay Structures in the `Song of
Roland': a Grounding Strategy of Oral Narrative"
9:30 Wallace Chafe, UCB, "Academic Speaking"
10:10 Geoffrey Nathan, Southern Illinois University, "Phonemes as
Mental Categories"
10:40 BREAK
11:00 Jeri Jaeger, UC Davis, "On the Acquisition of the Vowel Shift
Rule"
11:30 William Eilfort, University of Chicago, "Non-finite Clauses in
Creoles"
12:00 Jack Hoeksema, Ohio State University, "Some Theoretical Consequences
of Dutch Complementizer Agreement"
12:30 LUNCH BREAK
AFTERNOON SESSION: PARASESSION
2:00 Sandra Thompson, UCLA, "A Discourse Approach to the Cross-Linguistic
Category of `Adjective'"
2:40 Mark Durie, UCLA, "The Grammaticization of Number"
3:10 Elizabeth Closs Traugott, Stanford University, "From Polysemy
to Internal Semantic Reconstruction"
3:50 BREAK
4:10 Eric Pederson, UCB, "Intensive and Expressive Language in White
Hmong"
4:40 Ronald Langacker, UCSD, "Abstract Motion"
5:20 Justine Cassell, University of Chicago, and Robert Chametzky,
University of Chicago, "A la Recherche du Temps de Verbe Perdu :
Semantic Bootstrapping and the Acquisition of the Future Tense"
∂12-Feb-86 1748 EMMA@SU-CSLI.ARPA Calendar February 13, No. 3
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 12 Feb 86 17:47:44 PST
Date: Wed 12 Feb 86 17:40:37-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Calendar February 13, No. 3
To: friends@SU-CSLI.ARPA
Tel: 497-3479
Tel: 723-3561
!
C S L I C A L E N D A R O F P U B L I C E V E N T S
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
February 13, 1986 Stanford Vol. 1, No. 3
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR THIS THURSDAY, February 13, 1986
12 noon TINLunch
Ventura Hall No TINLunch
Conference Room
2:15 p.m. CSLI Seminar
Ventura Hall No seminar
Trailer Classroom
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
No colloquium
--------------
CSLI ACTIVITIES FOR NEXT THURSDAY, February 20, 1986
12 noon TINLunch
Ventura Hall Cresswell's Got a Real Attitude Problem
Conference Room Discussion led by David Israel, SRI and CSLI
(israel@csli)
(Abstract on page 2)
2:15 p.m. CSLI Seminar
Ventura Hall Lexical Rules and Lexical Representation
Trailer Classroom Mark Gawron (Gawron@csli)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
The Quest for Inheritance and Polymorphism
Luca Cardelli, DEC Stanford Research Center
--------------
ANNOUNCEMENT
An online, up-to-date calendar of CSLI events is available on SU-CSLI
in <csli>calendar.
!
Page 2 CSLI Calendar February 13, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
TINLUNCH ABSTRACT
Cresswell's Got a Real Attitude Problem
M. J. Cresswell has been working on the problems of the semantics
of the propositional attitudes for lo, these many years. He has
canvassed a large and bewildering array of options and has experimented
with more than a few. He now thinks he has the problem licked. Alas,
he doesn't. --David Israel
--------------
CSLI TALK
On the Semantic Content of the Notion `Thematic Role'
David Dowty, Ohio State
Tuesday, February 18, 12:00
Ventura Trailer Classroom
--------------
PIXELS AND PREDICATES
Idiosyncratic Diagrams
Kristina Hooper, Apple
1:00 pm, Wednesday, February 19, CSLI trailers
As we try to develop visual programming languages, we often rely on
our intuitions about ``humans use of visuals'', be the humans us or
them, me or you. Basically we seem to assert that people are great
with visuals, so we should do things visually.
In our enthusiasm we often forget that any complete visual language
must include both an input and an output phase, and that the current
state of most human's visual output capabilities is extremely limited.
Of course we can argue that people's general lack of manual
dexterity accounts for visual output difficulties, and that better
tools can assist them. But is this the case? Is there a deeper issue
revolving around conceptual representation that is contaminating our
communicativeness?
In an attempt to deal with this bothersome set of questions
somewhat systematically I once collected a huge number of diagrams to
see how people generated them. Putting aside for the moment the
difficulties inherent in analyzing these systematically, I was
astonished and impressed by the incredible variation in the diagrams;
it was the "difference," rather than the goodness or badness, that
struck me.
In this talk I will show you some of my collected diagrams, and
give you the benefit of my insights on these. My hope is that your
insights will add to mine, and that this might provide a start to
studying actual (as opposed to imaginary or wished for) visual
communication abilities. For though we all should plan on developing
new and powerful classes of visual communication, we will do well to
also examine such communication as it is now practiced.
!
Page 3 CSLI Calendar February 13, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
SYSTEM DESCRIPTION AND DEVELOPMENT MEETING
12:00, Monday February 24, Ventura trailer classroom
At last week's meeting Jens Kaasboll described his research on
developing systematic descriptions of the work and communication
patterns in a nursing situation in a hospital in Oslo. At our next
meeting on February 24 (no meeting Monday the 17th---Presidents' day
holiday) we will start from his papers and analyze the situation in
terms of linguistic theories being developed here. In particular we
want to look at the semantics of the interactions in terms of the
articulation of the domains of action (as they emerge in anticipation
of potential breakdowns) and the pragmatics in terms of interlinked
conversations for action. The session will be in a discussion and
workshop style. Relevant readings are two papers by Kaasboll
(entitled ``Intentional Development of Professional Language through
Computerization'' and ``Observation of People Working with
Information: A Case Study'') available in Room 238, computer science
dept., and parts (especially Chapter 12) of the book by Winograd and
Flores, ``Understanding Computers and Cognition'' just published by
Ablex.
--------------
SEMINAR SCHEDULE
Lexical Rules and Lexical Representations
Mark Gawron, Paul Kiparsky, Annie Zaenen
February 20, 27, and March 6
This series of talks reflects the ongoing elaboration of a model of
lexical representation. In the first, Mark Gawron will discuss a
frame-based lexical semantics and its relationship to a theory of
lexical rules. In the second, Paul Kiparsky will propose a theory of
the linking of thematic roles to their syntactic realizations,
emphasizing its interactions with a theory of morphology; and in the
third, a sub-workgroup of the lexical project will sketch a unification
based representation for the interaction of the different components
of the lexical representation and both syntax and sentence semantics.
The Structural Meaning of Clause Type: Capturing Cross-modal
and Cross-linguistic Generalizations
Dietmar Zaefferer
March 20
Dietmar Zaefferer will discuss the structure and meaning of
declarative, interrogative, imperative, exclamative, and other clause
types in a number of typologically different languages.
!
Page 4 CSLI Calendar February 13, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Reflexivisation:
Some Connections Between
Lexical, Syntactic, and Semantic Representation
Annie Zaenen, Peter Sells, Draga Zec
March 27
This presentation will concentrate on cross-linguistic variation in
the expression of simple direct object reflexivisation (as found in
English in a sentence like `John washed himself'). It will be shown
that the counterparts of such sentences in different languages can be
lexically transitive or intransitive, can be expressed in one word or
in two or three, and allow for one or more semantic interpretations
requiring semantic representations that treat the reflexive as a bound
variable in some cases but not in others. The data presented will show
that some simple ideas about the mapping from lexical arguments to
surface structure constituents and/or to semantic arguments are not
tenable.
Representation
Brian Smith, Jon Barwise, John Etchemendy, Ken Olson, John Perry
April 3, 10, 17, and 24
Issues of representation permeate CSLI research, often in implicit
ways. This four-part series will examine representation as a subject
matter in its own right, and will explore various representational
issues that relate to mind, computation, and semantics.
Visual Communication
Sandy Pentland, Fred Lakin, Guest Speakers
May 1, 8, and 15
Speakers in this series will discuss and illustrate ongoing research
concerned with mechanisms of visual communication and visual languages
and the identification of visual regularities that support the
distinctions and classes necessary to general-purpose reasoning. Alex
Pentland will discuss how organizational regularities in human
perception can be used to facilitate a rational computer system for
3-D graphics modelling. Fred Lakin will describe a Visual
Communication Lab, and, in particular, a project to construct visual
grammars for visual languages. Examples show the use of these
grammars to recognize and parse ``blackboard'' diagrams.
Events and Modes of Representing Change
Carol Cleland
May 22
!
Page 5 CSLI Calendar February 13, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
COLLOQUIUM PREVIEW
February 20: Luca Cardelli, ``The Quest for Inheritance and
Polymorphism''
February 27: Haim Gaifman, ``Logic of Pointers and Evaluations---The
Solution to the Self-referential Paradoxes''
March 6: Bill Rounds
March 13: Raymond Smullyan
April 17: Hector-Neri Castaneda
-------
∂12-Feb-86 1758 @SU-CSLI.ARPA:admin%cogsci@BERKELEY.EDU UCB Cognitive Science Seminar--Feb. 18 (Michael Silverstein)
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 12 Feb 86 17:58:35 PST
Received: from cogsci.berkeley.edu ([128.32.130.5].#Internet) by SU-CSLI.ARPA with TCP; Wed 12 Feb 86 17:44:55-PST
Received: by cogsci.berkeley.edu (5.44/1.9)
id AA29580; Wed, 12 Feb 86 14:20:45 PST
Date: Wed, 12 Feb 86 14:20:45 PST
From: admin%cogsci@BERKELEY.EDU (Cognitive Science Program)
Message-Id: <8602122220.AA29580@cogsci.berkeley.edu>
To: cogsci-friends@cogsci.berkeley.edu
Subject: UCB Cognitive Science Seminar--Feb. 18 (Michael Silverstein)
BERKELEY COGNITIVE SCIENCE PROGRAM
Spring 1986
Cognitive Science Seminar - IDS 237B
Tuesday, February 18, 11:00 - 12:30
2515 Tolman Hall
Discussion: 12:30 - 1:30
3105 Tolman (Beach Room)
``Tense, aspect, and the functional componentialization of events in language''
Michael Silverstein
Department of Anthropology, University of Chicago
Linguistic categories of tense, aspect, "relative tense,"
Aktionsart, predicate perspective, etc., differently coded in
languages, have distinct potentials for denoting (representing)
the characteristics of predicated events, depending on the
specific configuration of categories differentially operative
in any language, their markedness relations, and what I term
the "metapragmatic" content of the categories implemented in
the utterance/communication act---the denotational coding of
the very components of the communicative event as indexed by
it. Variation along these dimensions generates, it would seem,
the apparent complexity of representational content, and yields
a kind of topology of `eventhood' that, in our culture for
example, is consciously objectified and reconstructed as
"time," though it need not be.
---------------------------------------------------------------
UPCOMING TALKS
Feb 25: Frederick Reif, Physics and EMST, Education, UCB
Mar 4: Curtis Hardyck, Education and Psychology, UCB
Mar 11: Carlota Smith, Linguistics, University of Texas
(currently at the Center for Advanced Study in the
Behavioral Sciences)
Apr 1: Elisabeth Bates, Psychology, UCSD
Apr 8: Björn Lindblom, Linguistics, University of Stockholm;
Peter MacNeilage, Linguistics, University of Texas;
Michael Studdert-Kennedy, Psychology, Queens College
(all currently at the Center for Advanced Study in the
Behavioral Sciences)
---------------------------------------------------------------
ELSEWHERE ON CAMPUS
Twelfth Annual Meeting of the Berkeley Linguistics Society,
Feb. 15-17:
Saturday, 2/15 in 60 Evans: 9:00-12:30; 2:00-5:20; 7:00-9:20
Sunday, 2/16 in 2003 Life Sciences Bldg.: 9:00-12:40; 2:00-
5:30; 7:30-9:10
Monday, 2/17 in 2003 Life Sciences Bldg.: 9:00-12:30; 2:00-6:00
(The schedule will be sent out via electronic mail; hard-copy
schedules are available in the Linguistics Dept., 2337 Dwinelle,
642-2757.)
∂13-Feb-86 1956 @SU-CSLI.ARPA:Zaenen.pa@Xerox.COM David Dowty's talk
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 13 Feb 86 19:55:59 PST
Received: from Xerox.COM by SU-CSLI.ARPA with TCP; Thu 13 Feb 86 19:46:11-PST
Received: from Cabernet.ms by ArpaGateway.ms ; 13 FEB 86 14:32:09 PST
Date: 13 Feb 86 14:31 PST
From: Zaenen.pa@Xerox.COM
Subject: David Dowty's talk
To: friends@SU-CSLI.ARPA
Message-ID: <860213-143209-1763@Xerox>
I got this abstract for D.Dowty's talk too late to put it in the
calendar, so here it is by separate mail:
ON THE SEMANTIC CONTENT OF THE NOTION 'THEMATIC ROLE'
Thematic Roles had never been employed in formalized, model-theoretic
work until the recent proposals by Gennaro Chierchia (in his 1984
dissertation) and by Greg Carlson (in a forthcoming paper in
'Linguistics'). The present paper will try to raise some fundamental
questions not treated in these other two proposals as well as respond
to and build on them. The first task is to try to figure out how a
theory of thematic roles can be genuinely distinguishable from the way
n-place predicates and their arguments are interpreted in standard
predicate logic and its model theory. It is suggested that this can
be done by treating "thematic roles" in the standard approach as
clusters of entailments with respect to various arguments of verbs,
then putting constraints on these entailments, but it is argued that a
more revealing method is the neo-Davidsonian one of taking verbs as
one-place predicates of events and thematic roles as relations between
events (taken as primitives) and their participants. The hypothesis
is then put forward that arguments of event-nominals ("Mary's
dismissal of John", etc.) may be interpreted via a thematic-role
theory of this sort, while subcategorized arguments of verbs are
interpreted via the standard approach (verbs denote n-place
relations). The paper closes with some speculation as to the purpose
thematic roles may serve in the acquisition of language and in the
preliminary (but not final) individuation and categorization of
events.
---David Dowty
The talk is this coming Tuesday at noon in the trailer classroom.
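A worked example of the neo-Davidsonian notation the abstract
describes (the predicates here are the editor's illustration, not
Dowty's): on that approach a verb denotes a one-place predicate of
events and thematic roles relate events to their participants, so the
event-nominal ``Mary's dismissal of John'' comes out roughly as

   \exists e\,[\mathit{dismissal}(e) \land \mathit{Agent}(e,\mathit{Mary})
               \land \mathit{Patient}(e,\mathit{John})]

whereas on the standard approach the verb itself denotes a two-place
relation and ``Mary dismissed John'' is simply
\mathit{dismiss}(\mathit{Mary},\mathit{John}).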
∂19-Feb-86 1725 EMMA@SU-CSLI.ARPA Calendar February 20, No. 4
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 19 Feb 86 17:25:48 PST
Date: Wed 19 Feb 86 17:20:04-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Calendar February 20, No. 4
To: friends@SU-CSLI.ARPA
Tel: 497-3479
Tel: 723-3561
!
C S L I C A L E N D A R O F P U B L I C E V E N T S
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
February 20, 1986 Stanford Vol. 1, No. 4
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR THIS THURSDAY, February 20, 1986
12 noon TINLunch
Ventura Hall Cresswell's Got a Real Attitude Problem
Conference Room Discussion led by David Israel, (israel@su-csli)
2:15 p.m. CSLI Seminar
Ventura Hall Lexical Representation and Lexical Rules
Trailer Classroom Mark Gawron (Gawron@su-csli)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Ventura Hall The Quest for Inheritance and Polymorphism
Trailer Classroom Luca Cardelli, Digital Systems Research Center
(Abstract on page 2)
--------------
CSLI ACTIVITIES FOR NEXT THURSDAY, February 27, 1986
12 noon TINLunch
Ventura Hall The Aspectual Effect of Mass Term and
Conference Room Bare Plural Arguments
by Erhard Hinrichs
Discussion led by Godehard Link (Link@su-csli)
(Abstract on page 2)
2:15 p.m. CSLI Seminar
Ventura Hall Lexical Representation and Lexical Rules
Trailer Classroom Paul Kiparsky (Kiparsky@su-csli)
(Abstract on page 2)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Logic of Pointers and Evaluations---
The Solution to the Self-referential Paradoxes
Haim Gaifman, Hebrew University
--------------
!
Page 2 CSLI Calendar February 20, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
THIS WEEK'S COLLOQUIUM
The Quest for Inheritance and Polymorphism
Luca Cardelli, Digital Systems Research Center
Inheritance and polymorphism are two central concepts in
programming languages, with the common purpose of increasing program
flexibility and reusability. They can be understood and used in
untyped languages, but their utility is more apparent in typed
languages.
Our ideas about inheritance and polymorphism have been evolving
rapidly in the past few years, and we start understanding mechanisms
by which these concepts can be generalized and unified.
This talk will explain why, in the context of typed languages, an
extensive treatment of (multiple) inheritance requires polymorphism.
A notation is presented which accounts for a wide range of phenomena
in object-oriented, functional and system-modeling languages.
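Why typed inheritance calls for polymorphism can be seen in a small
sketch (the editor's illustration using Python's bounded type
variables, in the spirit of the talk rather than Cardelli's own
notation):

   from typing import TypeVar

   class Point:
       def __init__(self, x: float, y: float):
           self.x, self.y = x, y

   class ColorPoint(Point):        # a subtype: inherits x, y; adds color
       def __init__(self, x: float, y: float, color: str):
           super().__init__(x, y)
           self.color = color

   # Typing move as Point -> Point would forget that its argument was a
   # ColorPoint; a type variable bounded by Point ("for all P that are
   # subtypes of Point") carries the specific subtype through.
   P = TypeVar("P", bound=Point)

   def move(p: P, dx: float) -> P:
       p.x += dx
       return p

   cp = move(ColorPoint(0.0, 0.0, "red"), 1.0)
   print(cp.color)                 # subtype preserved: prints "red"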
--------------
NEXT WEEK'S TINLUNCH
The Aspectual Effect of Mass Term and Bare Plural Arguments
by Erhard Hinrichs
Discussion led by Godehard Link
This is the last section of the author's dissertation ``A
Compositional Semantics for Aktionsarten and NP Reference in English''
(Ohio State 1985) in which a tripartite Carlson style ontology for
events is developed and applied to the analysis of aspects in
English. In the present section, a compositional semantics of the
influence of mass term and plural arguments on the aspectual class of
the VP (accomplishment vs. activity) is offered, as in
(1) John ate (a cake)/cake/cakes.
To start the discussion, I will briefly summarize the basic ideas in
the rest of the dissertation as far as they bear on the issue at hand.
The complete fragment of English that the author provides is included
in the handout.
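(A standard telicity diagnostic, supplied by the editor rather than
taken from the handout, shows the effect at work: the object NP alone
shifts the aspectual class of the VP.)

   John ate a cake    in an hour / ??for an hour    (accomplishment)
   John ate cake      ??in an hour / for an hour    (activity)
   John ate cakes     ??in an hour / for an hour    (activity)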
--------------
NEXT WEEK'S SEMINAR
Lexical Rules and Lexical Representations
Mark Gawron, Paul Kiparsky, Annie Zaenen
February 20, 27, and March 6
This is the second of a series of talks reflecting the ongoing
elaboration of a model of lexical representation. In the first, Mark
Gawron discussed a frame-based lexical semantics and its relationship
to a theory of lexical rules. In this one, Paul Kiparsky will propose
a theory of the linking of thematic roles to their syntactic
realizations, emphasizing its interactions with a theory of
morphology; and in the third, a sub-workgroup of the lexical project
will sketch a unification based representation for the interaction of
the different components of the lexical representation and both syntax
and sentence semantics.
!
Page 3 CSLI Calendar February 20, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
AFA MEETING
Friday, February 21, 2:00-3:30
Ventura Conference Room
The informal group studying Peter Aczel's set theory with the Anti-
Foundation Axiom (AFA) will resume its meetings. On Friday, Jon
Barwise and John Etchemendy will start working through their draft
monograph on self-reference, where they use AFA to model various
approaches to self-referring propositions in an attempt to understand
Liar-like paradoxes. The group will meet on alternate Fridays.
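For readers who have not seen AFA, a one-equation illustration (the
editor's, not the group's): the axiom guarantees that systems of set
equations of this kind have unique solutions; in particular

   \Omega = \{\Omega\}

has exactly one solution, a non-well-founded set. Self-referring
propositions are then modelled as solutions of analogous equations,
for instance a Liar-like proposition p satisfying
p = [\,\mathrm{NotTrue}\; p\,].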
--------------
PIXELS AND PREDICATES
Principles of Graphical User-Interface Design
Bill Verplank, Xerox
1:00 pm, Wednesday, February 26, CSLI trailers
User-interfaces are becoming increasingly graphical with windows,
icons, pop-up menus, what-you-see-is-what-you-get, etc. I believe
that one key to success with these new user interfaces is good graphic
design. It's a new kind of graphics: ``graphics with handles''.
From my experience with the Xerox Star user interface, these seem
to be the critical graphical challenges:
---to create the illusion of manipulable objects
---to reveal hidden structure
---to establish a consistent graphic vocabulary
---to match the medium
---to provide visual order and user focus
--------------
LOGIC SEMINAR
A Logic Characterized by the Class of Linear Kripke Models
with Nested Domains
Giovanna Corsi, University of Florence
4:15, Monday, February 24, Math Faculty Lounge
--------------
SYSTEM DESCRIPTION AND DEVELOPMENT MEETING
12:00, Monday February 24, Ventura trailer classroom
(Abstract in last week's newsletter)
--------------
FUTURE COLLOQUIA
Logic of Pointers and Evaluations---
The Solution to the Self-referential Paradoxes
Haim Gaifman,
Department of Mathematics, Hebrew University, Israel
February 27
``Logical Specifications for
Feature Structures in Unification Grammars''
William C. Rounds and Robert Kasper
University of Michigan
March 6
``Self Reference and Self Consciousness''
Raymond Smullyan,
Department of Philosophy, Indiana University
March 13
!
Page 4 CSLI Calendar February 20, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Talk on Practical Reasoning
Hector-Neri Castaneda,
Department of Philosophy, Indiana University
April 17
--------------
FUTURE SEMINARS
Lexical Rules and Lexical Representations
Mark Gawron, Paul Kiparsky, Annie Zaenen
February 27 and March 6
Phil Cohen
March 13
The Structural Meaning of Clause Type: Capturing Cross-modal
and Cross-linguistic Generalizations
Dietmar Zaefferer
March 20
Reflexivisation:
Some Connections Between
Lexical, Syntactic, and Semantic Representation
Annie Zaenen, Peter Sells, Draga Zec
March 27
Representation
Brian Smith, Jon Barwise, John Etchemendy, Ken Olson, John Perry
April 3, 10, 17, and 24
Visual Communication
Sandy Pentland, Fred Lakin, Guest Speakers
May 1, 8, and 15
Events and Modes of Representing Change
Carol Cleland
May 22
Why Language isn't Information
Terry Winograd
May 29
Ivan Blair
June 5
Numbers, Relations, and Situations
Chris Menzel
June 12
!
Page 5 CSLI Calendar February 20, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
NEW CSLI REPORTS
Report No. CSLI-85-34, ``Applicability of Indexed Grammars to
Natural Languages'' by Gerald Gazdar, Report No. CSLI-85-39, ``The
Structures of Discourse Structure'' by Barbara Grosz and Candace L.
Sidner, and Report No. CSLI-85-44, ``Language, Mind, and Information''
by John Perry, have just been published. These reports may be
obtained by writing to Trudy Vizmanos, CSLI, Ventura Hall, Stanford,
CA 94305 or Trudy@SU-CSLI.
-------
∂20-Feb-86 1527 @SU-CSLI.ARPA:admin%cogsci@BERKELEY.EDU UCB Cognitive Science Seminar--Feb. 25 (F. Reif)
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 20 Feb 86 15:26:58 PST
Received: from cogsci.berkeley.edu ([128.32.130.5].#Internet) by SU-CSLI.ARPA with TCP; Thu 20 Feb 86 15:17:50-PST
Received: by cogsci.berkeley.edu (5.44/1.9)
id AA00396; Thu, 20 Feb 86 15:17:34 PST
Date: Thu, 20 Feb 86 15:17:34 PST
From: admin%cogsci@berkeley.edu (Cognitive Science Program)
Message-Id: <8602202317.AA00396@cogsci.berkeley.edu>
To: cogsci-friends@cogsci.berkeley.edu
Subject: UCB Cognitive Science Seminar--Feb. 25 (F. Reif)
BERKELEY COGNITIVE SCIENCE PROGRAM
Spring 1986
Cognitive Science Seminar - IDS 237B
Tuesday, February 25, 11:00 - 12:30
2515 Tolman Hall
Discussion: 12:30 - 1:30
3105 Tolman (Beach Room)
``Interpretation of Scientific and Mathematical Concepts:
Cognitive Issues and Instructional Implications''
F. Reif
Department of Physics and School of Education,
University of California at Berkeley
Scientific and mathematical concepts are significantly different
from everyday concepts and are notoriously difficult to learn. A
cognitive analysis shows that the values of scientific concepts can
be identified or found by several different modes of concept
interpretation. Some of these modes use formally explicit knowledge
and thought processes; others rely more on various kinds of compiled
knowledge. Each mode has distinctive consequences in terms of
attainable precision, likely errors, and ease of use. An attempt is
made to formulate an "ideal" model of scientific concept
interpretation; such a model uses a combination of modes to interpret
concepts in a manner that achieves reliable scientific effectiveness
as well as processing efficiency. This model can be compared with
the actual concept interpretations of expert scientists or novice
students. All these remarks can be well illustrated in the specific
case of the physics concept "acceleration". The preceding discussion
helps reveal both cognitive and metacognitive reasons why the
learning of scientific or mathematical concepts is particularly
difficult. It also suggests instructional methods for teaching such
concepts more effectively.
---------------------------------------------------------------
UPCOMING TALKS
Mar 4: Curtis Hardyck, Education and Psychology, UCB
Mar 11: Carlota Smith, Linguistics, University of Texas
(currently at the Center for Advanced Study in the
Behavioral Sciences)
Mar 18: John Haviland, Anthropology, Australian National
University (currently at the Center for Advanced
Study in the Behavioral Sciences)
Apr 1: Elisabeth Bates, Psychology, UCSD
Apr 8: Björn Lindblom, Linguistics, University of Stockholm;
Peter MacNeilage, Linguistics, University of Texas;
Michael Studdert-Kennedy, Psychology, Queens College
(all currently at the Center for Advanced Study in the
Behavioral Sciences)
Apr 29: Dedre Gentner, Psychology, University of Illinois
at Champaign-Urbana
∂24-Feb-86 0910 EMMA@SU-CSLI.ARPA Calendar update
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 24 Feb 86 09:10:33 PST
Date: Mon 24 Feb 86 09:04:40-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Calendar update
To: friends@SU-CSLI.ARPA
Tel: 497-3479
Tel: 723-3561
Two late notices of events for this week.
CSLI COLLOQUIUM
LOGIC OF POINTERS AND EVALUATIONS:
THE SOLUTION TO THE SELF-REFERENTIAL PARADOXES
Haim Gaifman
Mathematics Department, The Hebrew University, Jerusalem, Israel
Visiting at SRI
February 27, 1986
Ventura Hall
Imagine the following exchange:
Max: What I am saying at this very moment is nonsense.
Moritz: Yes, what you have just said is nonsense.
Evidently Max spoke nonsense and Moritz spoke to the point. Yet Max
and Moritz appear to have asserted the same thing, namely: that Max
spoke nonsense. Or consider the following two lines:
line 1: The sentence written on line 1 is not true.
line 2: The sentence written on line 1 is not true.
Our natural intuition is that the self-referring sentence on line 1 is
not true (whatever sense could be made of it). Therefore the sentence
on line 2, which asserts this very fact, should be true. But what is
written on line 2 is exactly the same as what is written on line 1.
I shall argue that the unavoidable conclusion is that truth values
should be assigned here to sentence-tokens and that any system in
which truth is only type-dependent (e.g., Kripke's system and its
variants) is inadequate for treating the self-referential situation.
Since the truth value of a token depends on the tokens to which it
points, whose values depend in their turn on the tokens to which they
point, and so on, the whole network of pointings (which might include
complicated loops) must be taken into account.
I shall present a simple formal way of representing such networks and
an algorithm for evaluating the truth values. On the input 'the
sentence on line 1' it returns GAP but on the input 'the sentence on
line 2' it returns TRUE. And it yields similarly intuitive results in
more complicated situations. For an overall treatment of
self-reference the tokens have to be replaced by the more general
pointers. A pointer is any object used to point to a sentence-type (a
token is a special case of pointer: it points to the sentence of which
it is a token). Calling a pointer is like a procedural call in a
program; eventually a truth value (TRUE, FALSE, or GAP) is returned -
which is the output of the algorithm.
I shall discuss some more recent work (since my last SRI talk) -
variants of the system and its possible extensions to mathematically
powerful languages. Attempts to make such comprehensive systems throw
new light on the problem of constructing "universal languages".
-------
STANFORD MATHEMATICS DEPARTMENT COLLOQUIUM
Professor Ian Richards, University of Minnesota
"An axiomatic approach to computability in analysis"
Thursday, Feb. 27, 1986, at 4:15 P.M.
Room 380-W, Math. Bldg. 380, Stanford
Tea will be served starting at 3:30 P.M. before the talk. There will
be a dinner with the speaker after the talk.
-------
∂24-Feb-86 1439 EMMA@SU-CSLI.ARPA re: Calendar update
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 24 Feb 86 14:38:11 PST
Date: Mon 24 Feb 86 14:29:35-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: re: Calendar update
To: friends@SU-CSLI.ARPA
Tel: 497-3479
Tel: 723-3561
The CSLI Colloquium by Haim Gaifman is at 4:15 on Thursday, February 27.
-------
∂26-Feb-86 1853 EMMA@SU-CSLI.ARPA Calendar February 27, No. 5
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 26 Feb 86 18:51:26 PST
Date: Wed 26 Feb 86 17:15:20-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Calendar February 27, No. 5
To: friends@SU-CSLI.ARPA
Tel: 497-3479
Tel: 723-3561
!
C S L I C A L E N D A R O F P U B L I C E V E N T S
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
February 27, 1986 Stanford Vol. 1, No. 5
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR THIS THURSDAY, February 27, 1986
12 noon TINLunch
Ventura Hall The Aspectual Effect of Mass Term and
Conference Room Bare Plural Arguments
by Erhard Hinrichs
Discussion led by Godehard Link (Link@su-csli)
2:15 p.m. CSLI Seminar
Ventura Hall Lexical Representation and Lexical Rules
Trailer Classroom Paul Kiparsky (Kiparsky@su-csli)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Ventura Hall Logic of Pointers and Evaluations---
Trailer Classroom The Solution to the Self-Referential Paradoxes
Haim Gaifman, Hebrew University
(Abstract on page 2)
--------------
CSLI ACTIVITIES FOR NEXT THURSDAY, March 6, 1986
12 noon TINLunch
Ventura Hall Women, Fire, and Dangerous Things
Conference Room by George Lakoff
Discussion led by Douglas Edwards (Edwards@sri-ai)
(Abstract on page 3)
2:15 p.m. CSLI Seminar
Ventura Hall Lexical Representation and Lexical Rules
Trailer Classroom Annie Zaenen (Zaenen.pa@xerox)
(Abstract on page 3)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Logical Specifications for Feature Structures in
Unification Grammars
William C. Rounds and Robert Kasper
University of Michigan
(Abstract on page 3)
--------------
!
Page 2 CSLI Calendar February 27, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
THIS WEEK'S COLLOQUIUM
Logic of Pointers and Evaluations:
The Solution to the Self-Referential Paradoxes
Haim Gaifman
Mathematics Department, The Hebrew University
Imagine the following exchange:
Max: What I am saying at this very moment is nonsense.
Moritz: Yes, what you have just said is nonsense.
Evidently Max spoke nonsense and Moritz spoke to the point. Yet Max
and Moritz appear to have asserted the same thing, namely: that Max
spoke nonsense. Or consider the following two lines:
line 1: The sentence written on line 1 is not true.
line 2: The sentence written on line 1 is not true.
Our natural intuition is that the self-referring sentence on line 1 is
not true (whatever sense could be made of it). Therefore the sentence
on line 2, which asserts this very fact, should be true. But what is
written on line 2 is exactly the same as what is written on line 1.
I shall argue that the unavoidable conclusion is that truth values
should be assigned here to sentence-tokens and that any system in
which truth is only type-dependent (e.g., Kripke's system and its
variants) is inadequate for treating the self-referential situation.
Since the truth value of a token depends on the tokens to which it
points, whose values depend in their turn on the tokens to which they
point, and so on, the whole network of pointings (which might include
complicated loops) must be taken into account.
I shall present a simple formal way of representing such networks
and an algorithm for evaluating the truth values. On the input `the
sentence on line 1' it returns GAP but on the input `the sentence on
line 2' it returns TRUE. And it yields similarly intuitive results in
more complicated situations. For an overall treatment of
self-reference the tokens have to be replaced by the more general
pointers. A pointer is any object used to point to a sentence-type (a
token is a special case of pointer: it points to the sentence of which
it is a token). Calling a pointer is like a procedural call in a
program; eventually a truth value (TRUE, FALSE, or GAP) is
returned---which is the output of the algorithm.
I shall discuss some more recent work (since my last SRI
talk)---variants of the system and its possible extensions to
mathematically powerful languages. Attempts to make such comprehensive
systems throw new light on the problem of constructing ``universal
languages''.
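The two-line example invites a small executable sketch. The following
is the editor's reconstruction for this restricted case (every
sentence has the form ``the sentence a given pointer points to is not
true''), not Gaifman's algorithm itself:

   TRUE, FALSE, GAP = "TRUE", "FALSE", "GAP"

   def evaluate(p, points_to, values=None, stack=None):
       # points_to maps each pointer to the pointer its sentence is
       # about.  A pointer whose evaluation loops back on itself gets
       # GAP; "... is not true" is TRUE of a FALSE or GAP target.
       values = {} if values is None else values
       stack = set() if stack is None else stack
       if p in values:
           return values[p]
       if p in stack:          # the call has looped: the loop gets GAP
           values[p] = GAP
           return GAP
       stack.add(p)
       target = evaluate(points_to[p], points_to, values, stack)
       stack.discard(p)
       values.setdefault(p, FALSE if target == TRUE else TRUE)
       return values[p]

   net = {"line1": "line1", "line2": "line1"}
   values = {}
   print(evaluate("line1", net, values))   # GAP
   print(evaluate("line2", net, values))   # TRUE

As in the abstract, the self-referring token on line 1 comes out GAP,
while the distinct token of the same type on line 2 comes out TRUE.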
!
Page 3 CSLI Calendar February 27, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
NEXT WEEK'S TINLUNCH
A discussion of pragmatic effects in `there'-constructions
from George Lakoff's ``Women, Fire, and Dangerous Things''
led by Douglas Edwards
Lakoff analyzes `there'-constructions intensively in an appendix
(itself book-length) to his ``Women, Fire, and Dangerous Things.'' He
argues that the `syntactic' behavior of `there'-constructions is
dependent upon the semantic interpretation, and even the pragmatic
force, associated with them. He uses the expression ``grammatical
construction'' to refer to such an association of a set of conditions
on syntactic form with a set of conditions on meaning.
Lakoff derives some fairly subtle behaviors of embedded `there'-
constructions from the following pragmatic principle: ``Clauses
expressing a reason allow speech act constructions that convey state-
ments, and the content of the statement equals the reason expressed.''
In spite of the pragmatic nature of this principle, the unacceptability
of sentences violating it seems (to Lakoff?) to be indistinguishable
from the unacceptability of sentences that are syntactically ill-formed.
Are sentences violating Lakoff's principle intuitively
distinguishable from those that are ill-formed for purely syntactic
reasons? If not, can a theory that avoids the primitive notion of
``grammatical construction'' (and perhaps tries for a relatively
autonomous syntax) account for Lakoff's phenomena?
--------------
NEXT WEEK'S SEMINAR
Lexical Rules and Lexical Representations
Mark Gawron, Paul Kiparsky, Annie Zaenen
February 20, 27, and March 6
This is the third of a series of talks reflecting the ongoing
elaboration of a model of lexical representation. In the first, Mark
Gawron discussed a frame-based lexical semantics and its relationship
to a theory of lexical rules. In the second, Paul Kiparsky proposed a
theory of the linking of thematic roles to their syntactic realizations,
emphasizing its interactions with a theory of morphology; and in this
one, a sub-workgroup of the lexical project will sketch a unification
based representation for the interaction of the different components
of the lexical representation and both syntax and sentence semantics.
--------------
NEXT WEEK'S COLLOQUIUM
Logical Specifications for Feature Structures
in Unification Grammars
William C. Rounds and Robert Kasper, University of Michigan
In this paper we show how to use a simple modal logic to give a
complete axiomatization of disjunctively specified feature or record
structures commonly used in unification-based grammar formalisms in
computational linguistics. The logic was originally developed as a logic
to explain the semantics of concurrency, so this is a radically different
application. We prove a normal form result based on the idea of Nerode
equivalence from finite automata theory, and we show that the
satisfiability problem for our logical formulas is NP-complete. This last
result is a little surprising since our formulas do not contain negation.
Finally, we show how the unification problem for term-rewriting systems
can be expressed as the satisfiability problem for our formulas.
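For a concrete point of reference, feature structures and unification
can be sketched in a few lines (the editor's toy version, with
structures as nested dictionaries; it omits the disjunction and
structure sharing that the logic in the talk is designed to handle):

   def unify(f, g):
       # Most general structure subsuming both f and g, or None on a
       # clash.  Atomic values are strings.
       if f == g:
           return f
       if isinstance(f, dict) and isinstance(g, dict):
           out = dict(f)
           for feature, value in g.items():
               if feature in out:
                   sub = unify(out[feature], value)
                   if sub is None:
                       return None      # feature clash
                   out[feature] = sub
               else:
                   out[feature] = value
           return out
       return None              # conflicting atoms, or atom vs. structure

   a = {"cat": "np", "agr": {"num": "sg"}}
   b = {"agr": {"num": "sg", "per": "3"}}
   print(unify(a, b))   # {'cat': 'np', 'agr': {'num': 'sg', 'per': '3'}}
   print(unify(a, {"agr": {"num": "pl"}}))   # None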
!
Page 4 CSLI Calendar February 27, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
PIXELS AND PREDICATES
Sam: A Text Editor Based on Structural Regular Expressions
Rob Pike, Bell Labs
1:00 pm, Monday, March 3, CSLI trailers
(This meeting is Monday not Wednesday, the usual meeting date)
This talk will assume some familiarity with the `cut and paste'
model of editing supported by the mouse interface, and will focus on
the command language.
`Sam' has two interfaces: a mouse-based language very similar to
`jim'(9.1), and a command language reminiscent of `ed'(1). `Sam' is
based on `structural regular expressions': the application of regular
expressions to describe the form of a file. Conventional Unix tools
think of their input as arrays of lines. The new notation makes it
easy to make changes to files regardless of their structure, to define
structure within the elements (e.g., the pieces of a line), and to
change the apparent shape of a file according to the change being
made.
The use of structural regular expressions makes it possible for the
mouse and command languages to operate on the same objects, so that
editing commands from the mouse and keyboard may be mixed comfortably
and effectively. Of course, either mouse or keyboard may be used
exclusively of the other, so `sam' can be used as if it were `jim',
`ed' or even `sed'---a `stream' version of `sam' is forthcoming.
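The flavor of a structural regular expression can be suggested with a
small analogue (the editor's Python paraphrase of sam's `x/pattern/
command' loop, not sam code): the pattern extracts the pieces of the
file to operate on, wherever line boundaries happen to fall.

   import re

   text = "name: sam\nkind: editor; author: rob pike"

   # Apply a change to every match of a pattern rather than to every
   # line -- roughly what sam's  x/[a-z]+:/ c/.../  would do.
   print(re.sub(r"(\w+):", lambda m: m.group(1).upper() + ":", text))
   # -> NAME: sam
   #    KIND: editor; AUTHOR: rob pike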
--------------
LOGIC SEMINAR
The Polynomial Time Hierarchy and Fragments of Bounded Arithmetic
Dr. Samuel Buss
Mathematical Sciences Research Institute, Berkeley
4:15, Monday, March 3, Math Faculty Lounge
-------
∂27-Feb-86 1507 EMMA@SU-CSLI.ARPA Calendar Addition
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 27 Feb 86 15:07:50 PST
Date: Thu 27 Feb 86 15:04:04-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Calendar Addition
To: friends@SU-CSLI.ARPA
Tel: 497-3479
Tel: 723-3561
SYSTEM DESCRIPTION AND DEVELOPMENT TALK
The Perspective Concept in Computer Science
Kristen Nygaard (University of Oslo)
Monday, March 3, 12:15pm
CSLI Trailer Classroom (in front of Ventura Hall)
Notions like functional programming, logic programming, and
object-oriented programming embed different ways of understanding the
computing process---different perspectives. Also, methods for system
development will reflect different perspectives upon the nature of
organizations and society. It is important for computer scientists to
be aware of these perspectives and to take them into account in their
professional work. The lecture examines the nature of the perspective
concept and discusses a number of examples.
-----
Nygaard was one of the inventors of SIMULA, the first object-oriented
programming language. --Terry Winograd
-------
∂27-Feb-86 1529 @SU-CSLI.ARPA:admin%cogsci@BERKELEY.EDU UCB Cognitive Science Seminar--March 4 (Curtis Hardyck)
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 27 Feb 86 15:29:22 PST
Received: from cogsci.berkeley.edu ([128.32.130.5].#Internet) by SU-CSLI.ARPA with TCP; Thu 27 Feb 86 15:22:44-PST
Received: by cogsci.berkeley.edu (5.45/1.9)
id AA05011; Wed, 26 Feb 86 15:54:31 PST
Date: Wed, 26 Feb 86 15:54:31 PST
From: admin%cogsci@BERKELEY.EDU (Cognitive Science Program)
Message-Id: <8602262354.AA05011@cogsci.berkeley.edu>
To: allmsgs@cogsci.berkeley.edu, cogsci-friends@cogsci.berkeley.edu,
seminars@ucbvax.berkeley.edu
Subject: UCB Cognitive Science Seminar--March 4 (Curtis Hardyck)
BERKELEY COGNITIVE SCIENCE PROGRAM
Cognitive Science Seminar -- IDS 237B
Tuesday, March 4, 11:00 - 12:30
2515 Tolman Hall
Discussion: 12:30 - 1:30
3105 Tolman (Beach Room)
``COGNITIVE MODELS OF HUMAN CEREBRAL LATERALIZATION:
A TUTORIAL REVIEW''
Curtis Hardyck
Department of Psychology and School of Education,
University of California at Berkeley
Models of human cerebral functioning have ranged from notions of
extreme anatomical specificity to beliefs in global functioning.
Within the field of cerebral lateralization, opinions have ranged
from positions favoring extreme lateralization (almost all functions
localized in one hemisphere) to bilateralization (almost all
functions existing in both hemispheres). Intermingled with these
positions have been promulgations of hemisphericity as polar
opposites, e.g., right brain (creative insightfulness) vs. left brain
(lackluster drudgery), which have been adopted into popular culture.
I will provide a brief historical review of this problem and a
discussion of current cognitive models of lateralization appropriate
for examination within a cognitive science framework.
---------------------------------------------------------------------
UPCOMING TALKS
Mar 11: Carlota Smith, Linguistics, University of Texas
(currently at the Center for Advanced Study in the
Behavioral Sciences)
Mar 18: John Haviland, Anthropology, Australian National
University (currently at the Center for Advanced
Study in the Behavioral Sciences)
Mar 25: Martin Braine, Psychology, NYU (currently at Stanford)
Apr 1: Elisabeth Bates, Psychology, UCSD
Apr 8: Bj"orn Lindblom, Linguistics, University of Stock-
holm; Peter MacNeilage, Linguistics, University of
Texas; Michael Studdart-Kennedy, Psychology, Queens
College (all currently at the Center for Advanced
Study in the Behavioral Sciences)
Apr 29: Dedre Gentner, Psychology, University of Illinois
at Champaign-Urbana
--------------------------------------------------------------------
ELSEWHERE ON CAMPUS
On Monday, March 3, Prof. Robert Siegler of the Psychology
Department at Carnegie-Mellon will give a talk entitled "Strategy
choice procedures: how do children decide what to do?"
from noon to 2:00 p.m. in the Beach Room, 3105 Tolman Hall.
∂27-Feb-86 1548 @SU-CSLI.ARPA:GAIFMAN@SRI-AI.ARPA Gaifman's talk today
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 27 Feb 86 15:47:09 PST
Received: from SRI-AI.ARPA by SU-CSLI.ARPA with TCP; Thu 27 Feb 86 15:37:44-PST
Date: Thu 27 Feb 86 11:46:50-PST
From: GAIFMAN@SRI-AI.ARPA
Subject: Gaifman's talk today
To: logmtc@SU-AI.ARPA, friends@SU-CSLI.ARPA
I tried to avoid a clash but found no alternative. So the talk
entitled:
Logic of pointers and evaluations:
The solution to the self-referential paradoxes.
will take place as scheduled today, Feb 27, at 16:15 in Ventura Hall
(the colloquium hall in the trailers).
-------
∂03-Mar-86 1245 @SU-CSLI.ARPA:Bush@SRI-KL.ARPA housing
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 3 Mar 86 12:45:15 PST
Received: from SRI-KL.ARPA by SU-CSLI.ARPA with TCP; Tue 4 Mar 86 12:59:22-PST
Date: Mon 3 Mar 86 11:11:33-PST
From: Marcia Bush <Bush@SRI-KL>
Subject: housing
To: friends%SU-CSLI@SRI-KL
cc: Bush@SRI-KL, Kopec@SRI-KL
Gary Kopec and I need a place (1 bedroom or larger, preferably
Palo Alto or north) to housesit or rent for the months of May
and June. We are both non-smokers with no pets. Any leads
would be appreciated.
Marcia Bush
Bush@sri-kl
496-4603
Gary Kopec
Kopec@sri-kl
496-4606
-------
∂04-Mar-86 0918 CHRIS@SU-CSLI.ARPA Honda civic with lights on.
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 4 Mar 86 09:18:18 PST
Date: Tue 4 Mar 86 09:14:00-PST
From: Chris Menzel <CHRIS@SU-CSLI.ARPA>
Subject: Honda civic with lights on.
To: friends@SU-CSLI.ARPA
License #792 VJY.
-------
∂04-Mar-86 1531 @SU-CSLI.ARPA:GAIFMAN@SRI-AI.ARPA "I'm talking nonsense" -supervaluations
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 4 Mar 86 15:31:13 PST
Received: from SRI-AI.ARPA by SU-CSLI.ARPA with TCP; Tue 4 Mar 86 15:28:33-PST
Date: Tue 4 Mar 86 15:27:39-PST
From: GAIFMAN@SRI-AI.ARPA
Subject: "I'm talking nonsense" -supervaluations
To: friends@SU-CSLI.ARPA
This is intended for those who stayed for the discussion after my
"Logic of pointers " -talk.
Etchemendy and Barwise (and Israel?) would prefer to treat the
"I'm talking nonsense" sentence not as nonsense but as false (where
'nonsense' is by definition 'neither true nor false'). The sentence
does indeed come out as false if instead of the strong Kleene table
one uses supervaluations. In this procedure, if a sentence comes out
as true (false) under all assignments of standard (T,F) truth values
to the pointers, then every pointer to this sentence gets T (F).
Thus "p is neither true nor false" comes out as false. There is an
obvious supervaluation variant to my algorithm (just as there is the
supervaluation variant of Kripke's model) and in this variant the
sentence is evaluated F.
My own intuition is that it is nonsense, so in this case I would
prefer the strong Kleene evaluation. In any case this appears now
to be a side issue.
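The procedure admits a tiny executable paraphrase (the editor's,
assuming a sentence's value is a Boolean function of standard
truth-value assignments to its pointers):

   from itertools import product

   def supervaluation(sentence, pointers):
       # T if the sentence is true under every standard assignment of
       # T/F to the pointers, F if false under every one, else GAP.
       verdicts = {sentence(dict(zip(pointers, vals)))
                   for vals in product("TF", repeat=len(pointers))}
       if verdicts == {True}:
           return "T"
       if verdicts == {False}:
           return "F"
       return "GAP"

   # "I'm talking nonsense": `p is neither true nor false', with p
   # pointing to that very sentence.  It is false under either standard
   # value of p, so supervaluation evaluates it F.
   nonsense = lambda a: a["p"] not in ("T", "F")
   print(supervaluation(nonsense, ["p"]))   # F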
-------
∂05-Mar-86 1709 EMMA@SU-CSLI.ARPA Calendar, March 6, No. 6
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 5 Mar 86 17:08:19 PST
Date: Wed 5 Mar 86 16:57:49-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Calendar, March 6, No. 6
To: friends@SU-CSLI.ARPA
Tel: 497-3479
Tel: 723-3561
!
C S L I C A L E N D A R O F P U B L I C E V E N T S
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
March 6, 1986 Stanford Vol. 1, No. 6
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR THIS THURSDAY, March 6, 1986
12 noon TINLunch
Ventura Hall Women, Fire, and Dangerous Things
Conference Room by George Lakoff
Discussion led by Douglas Edwards (Edwards@sri-ai)
2:15 p.m. CSLI Seminar
Ventura Hall CANCELLED
Trailer Classroom to be rescheduled
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Logical Specifications for Feature Structures in
Unification Grammars
William C. Rounds and Robert Kasper
University of Michigan
--------------
CSLI ACTIVITIES FOR NEXT THURSDAY, March 13, 1986
12 noon TINLunch
Ventura Hall Brains, Behavior, and Robotics
Conference Room by James Albus
Discussion led by Pentti Kanerva (Kanerva@riacs.arpa)
(Abstract on page 2)
2:15 p.m. CSLI Seminar
Ventura Hall Attempts and Performances: A Theory of Speech Acts
Trailer Classroom Phil Cohen (Pcohen@sri-ai)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Turing Auditorium Self-Reference and Self-Consciousness
Raymond Smullyan, Indiana University
--------------
ANNOUNCEMENT
Please note that the March 6 seminar on Lexical Representation and
Lexical Rules has been cancelled; it will be rescheduled at a later
date.
Also note that next week's colloquium will be in Turing Auditorium
which is in the Earth Sciences building next to Terman Engineering.
!
Page 2 CSLI Calendar March 6, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
NEXT WEEK'S TINLUNCH
Brains, Behavior, and Robotics
by James S. Albus
Discussion led by Pentti Kanerva (Kanerva@riacs.arpa)
In 1950, Alan Turing wrote, ``We may hope that machines will
eventually compete with men in all purely intellectual fields. But
which are the best ones to start with? . . . Many people think that
a very abstract activity, like the playing of chess, would be best.
It can also be maintained that it is best to provide the machine with
the best sense organs that money can buy, and then teach it to
understand. . . . This process could follow the normal teaching of a
child. Things would be pointed out and named, etc. Again I do not
know what the right answer is, but I think that both approaches should
be tried.'' (Quoted by Albus on p. 5.)
``Brains, Behavior, and Robotics'' takes the second of these two
approaches to artificial intelligence, the first being the pursuit of
abstract reasoning. The book combines over a decade of research by
Albus. It is predicated on the idea that to understand human
intelligence we need to understand the evolution of intelligence in
the animal kingdom. The models developed are mathematical
(computational), but one of their criteria is neurophysiological
plausibility. Although the research is aimed at understanding the
mechanical basis of cognition, Albus also discusses philosophical and
social implications of his work.
--------------
AFA SEMINAR
A Proof Using AFA That Maximal Fixed Points are Final
Peter Aczel, University of Manchester, Visiting CSLI
2-3:30, March 7, Ventura Conference Room
--------------
LOGIC SEMINAR
Interpretations in Arithmetic
Dr. Alex Wilkie, University of Oxford, visiting UC Berkeley
12:00, Monday, March 10, Math Faculty Lounge
(Note the change of time for this particular meeting.)
--------------
SYSTEM DESCRIPTION AND DEVELOPMENT TALK
The Perspective Concept in Computer Science
12:15, Monday, March 10, Ventura Conference Room
Our topic next Monday (March 10) will be a continued discussion
(introduced by Jens Kaasboll) of the issues raised by Kristen Nygaard
in his talk about perspectives on the use of computers:
Regardless of definitions of ``perspective'', there exist many
perspectives on computers. Computers are regarded as systems, tools,
institutions, toys, partners, media, symbols, etc. Even so, there
exist system description languages but no tool, or institution, or
... languages. What do the other perspectives reflect that makes
them less attractive for language designers? Suggestive answer: The
system perspective is the definitive computer science perspective in
which the processes inside the computers are regarded as the goal of
our work. Viewed through some of the other perspectives, the computer
is seen as a means for achieving ends outside the computer, i.e., the
needs of people using the computers.
!
Page 3 CSLI Calendar March 6, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
SUMMARY OF THE SYSTEM DESCRIPTION AND DEVELOPMENT TALK
The Perspective Concept in Computer Science
Kristen Nygaard (University of Oslo)
Monday, March 3
Notions like functional programming, logic programming, and
object-oriented programming embed different ways of understanding the
computing process---different perspectives. Also, methods for system
development will reflect different perspectives upon the nature of
organizations and society. It is important for computer scientists to
be aware of these perspectives and to take them into account in their
professional work. The lecture examined the nature of the perspective
concept and discussed a number of examples.
-------
∂06-Mar-86 0943 EMMA@SU-CSLI.ARPA Calendar update
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 6 Mar 86 09:43:15 PST
Date: Thu 6 Mar 86 09:37:00-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Calendar update
To: friends@SU-CSLI.ARPA
Tel: 497-3479
Tel: 723-3561
The following arrived after the Calendar was sent out.
CSLI SEMINAR
Attempts and Performances: A Theory of Speech Acts
Phil Cohen (Pcohen@sri-ai)
2:15, Thursday, March 13, Ventura Trailer Classroom
I will present a theory of speech acts, developed with Hector
Levesque, in which illocutionary acts are defined as ATTEMPTS---as
actions done with certain beliefs and goals. The basis on which the
agent holds the relevant beliefs and goals derives from a theory of
rational interaction. However, there is no primitive notion of an
illocutionary act. The theory meets a number of adequacy criteria for
theories of speech acts. In particular, I will show how it handles
performatives.
-------
∂06-Mar-86 1011 @SU-CSLI.ARPA:admin%cogsci@BERKELEY.EDU UCB Cognitive Science Seminar--March 11 (Carlota Smith)
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 6 Mar 86 10:11:45 PST
Received: from cogsci.berkeley.edu ([128.32.130.5].#Internet) by SU-CSLI.ARPA with TCP; Thu 6 Mar 86 10:03:45-PST
Received: by cogsci.berkeley.edu (5.45/1.9)
id AA29379; Thu, 6 Mar 86 09:56:27 PST
Date: Thu, 6 Mar 86 09:56:27 PST
From: admin%cogsci@berkeley.edu (Cognitive Science Program)
Message-Id: <8603061756.AA29379@cogsci.berkeley.edu>
To: cogsci-friends@cogsci.berkeley.edu
Subject: UCB Cognitive Science Seminar--March 11 (Carlota Smith)
BERKELEY COGNITIVE SCIENCE PROGRAM
Cognitive Science Seminar - IDS 237B
Tuesday, March 11, 11:00 - 12:30
2515 Tolman Hall
Discussion: 12:30 - 1:30
3105 Tolman (Beach Room)
``A speaker-based approach to aspect''
Carlota Smith
University of Texas and
Institute for Advanced Study in the Behavioral Sciences
I will present a general account that focusses on how
aspect contributes to the point of view of a sentence, and on
differences between aspectual systems. In this approach aspec-
tual systems have two components, situation aspect and
viewpoint aspect. The components are developed in terms of
idealizations that underlie the familiar Aristotelian classifi-
cation of situations. The idealizations specify the distin-
guishing characteristics of situations and viewpoints but
underdetermine the temporal properties of each. This allows
both for similarities and rather subtle differences in the way
languages realize basic aspectual notions. I will discuss some
of these differences in the perfective and imperfective
viewpoints, using examples from Mandarin Chinese, Japanese,
French, and English. I will also discuss variations in the way
languages realize basic situation types. Within the pattern of
their language speakers choose the situation and viewpoint
aspect of a sentence, presenting an actual situation as an
exemplar of a particular situation type.
---------------------------------------------------------------
UPCOMING TALKS
Mar 18: John Haviland, Anthropology, Australian National
University (currently at the Center for Advanced
Study in the Behavioral Sciences)
Mar 25: Martin Braine, Psychology, NYU (currently at Stan-
ford)
Apr 1: Elisabeth Bates, Psychology, UCSD
Apr 8: Björn Lindblom, Linguistics, University of Stock-
holm; Peter MacNeilage, Linguistics, University of
Texas; Michael Studdert-Kennedy, Psychology, Queens
College (all currently at the Center for Advanced
Study in the Behavioral Sciences)
Apr 29: Dedre Gentner, Psychology, University of Illinois
at Champaign-Urbana
----------------------------------------------------------------
ELSEWHERE ON CAMPUS
On Monday, March 10, Prof. Joseph Campos of the Psychology
Department of the University of Denver will speak on "The
importance of self-produced locomotion for psychological
development" from noon to 2:00 p.m. in the Beach Room, 3105
Tolman Hall.
On Tuesday, March 11, Prof. Linda A. Waugh of the Departments
of Modern Languages and Linguistics and of Comparative Litera-
ture at Cornell University (currently at the Stanford Humani-
ties Center) will speak on "Tense-aspect and discourse func-
tion: The French simple past in journalistic discourse" at the
Linguistics Group meeting at 8:00 p.m. in 117 Dwinelle Hall.
∂12-Mar-86 1015 EMMA@SU-CSLI.ARPA Tomorrow's CSLI colloquium
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 12 Mar 86 10:15:42 PST
Date: Wed 12 Mar 86 10:03:19-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Tomorrow's CSLI colloquium
To: friends@SU-CSLI.ARPA
Tel: 497-3479
Tel: 723-3561
The colloquium tomorrow at 4:15 by Raymond Smullyan of Indiana
University will NOT be in Turing Auditorium as stated in last week's
CSLI Calendar. It will instead be in Jordan Hall (Bldg. 420 in the
Quad), room 040.
-------
∂12-Mar-86 1641 EMMA@SU-CSLI.ARPA Calendar, March 13, No. 7
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 12 Mar 86 16:40:58 PST
Date: Wed 12 Mar 86 16:31:56-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Calendar, March 13, No. 7
To: friends@SU-CSLI.ARPA
Tel: 497-3479
Tel: 723-3561
!
C S L I C A L E N D A R O F P U B L I C E V E N T S
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
March 13, 1986 Stanford Vol. 1, No. 7
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR THIS THURSDAY, March 13, 1986
12 noon TINLunch
Ventura Hall Brains, Behavior, and Robotics
Conference Room by James Albus
Discussion led by Pentti Kanerva (Kanerva@riacs.arpa)
2:15 p.m. CSLI Seminar
Ventura Hall Attempts and Performances: A Theory of Speech Acts
Trailer Classroom Phil Cohen (Pcohen@sri-ai)
(Abstract on page 2)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Jordan Hall Self-Reference and Self-Consciousness
Room 040 Raymond Smullyan, Indiana University
(Abstract on page 2)
--------------
CSLI ACTIVITIES FOR NEXT THURSDAY, March 20, 1986
12 noon TINLunch
Ventura Hall Models, Metaphysics and the Vagaries of Empiricism
Conference Room by Marx W. Wartofsky
Discussion led by Ivan Blair (Blair@su-csli)
(Abstract on page 2)
2:15 p.m. CSLI Seminar
Ventura Hall The Structural Meaning of Clause Type: Capturing
Trailer Classroom Cross-modal and Cross-linguistic Generalizations
Dietmar Zaefferer (G.Zaeff@su-csli)
(Abstract on page 3)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
No Colloquium this week
--------------
ANNOUNCEMENT
Please note that tomorrow's colloquium will NOT be in Turing
Auditorium as stated in last week's CSLI Calendar. It will instead be
in Jordan Hall (Bldg. 420 in the Quad), room 040.
!
Page 2 CSLI Calendar March 13, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
THIS WEEK'S SEMINAR
Attempts and Performances: A Theory of Speech Acts
Phil Cohen (Pcohen@sri-ai)
I will present a theory of speech acts, developed with Hector
Levesque, in which illocutionary acts are defined as ATTEMPTS---as
actions done with certain beliefs and goals. The basis on which the
agent holds the relevant beliefs and goals derives from a theory of
rational interaction. However, there is no primitive notion of an
illocutionary act. The theory meets a number of adequacy criteria for
theories of speech acts. In particular, I will show how it handles
performatives.
--------------
THIS WEEK'S COLLOQUIUM
Self-Reference and Self-Consciousness
Raymond Smullyan
Oscar Ewing Professor of Philosophy, Indiana University
Professor Emeritus
City University of New York, Lehman College and Graduate Center
We consider some epistemic versions of Gödel's Incompleteness
Theorem---e.g., conditions under which a logician cannot believe he or
she is consistent without losing his or her consistency. A related
theorem of Löb gives information about beliefs that of their own
nature are necessarily self-fulfilling.
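For reference (a standard formulation, not taken from the talk),
Löb's theorem in provability-logic notation reads

    \Box(\Box p \rightarrow p) \rightarrow \Box p

and Gödel's second incompleteness theorem is the special case
p = \bot: a consistent system cannot prove its own consistency.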
--------------
NEXT WEEK'S TINLUNCH
Models, Metaphysics and the Vagaries of Empiricism
by Marx W. Wartofsky
Discussion led by Ivan Blair (Blair@su-csli)
In the introduction to the collection of his articles from which
the paper for this TINLunch is taken, Wartofsky says that his concern
is with `the notion of representation, and in particular, the role and
nature of the model, in the natural sciences, in theories of
perception and cognition, and in art.' In `Meaning, Metaphysics and
the Vagaries of Empiricism,' he explores the existential commitment
that should accompany the creation and use of a model, from the
perspective of a critical empiricism. Wartofsky considers six grades
of existential commitment, or ways of construing the ontological
claims of a model, ranging from the ad hoc analogy to a true
description of reality. Critical of the attempt by empiricists to
reduce theoretical statements to assertions about sense perception,
Wartofsky seeks to ground existence claims in what he calls the common
understanding, which is associated with everyday language
representations of experience.
I intend the issues addressed in this article to provide the
framework for a general discussion of the relation between ontology
and epistemology.
!
Page 3 CSLI Calendar March 13, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
NEXT WEEK'S SEMINAR
The Structural Meaning of Clause Type:
Capturing Cross-modal and Cross-linguistic Generalizations
Dietmar Zaefferer (G.Zaeff@su-csli)
Theories of speech acts usually take notions like declarative
sentence, imperative sentence, etc. as input, i.e., they treat notions
of sentence mood (form type) as primitives, and then try to correlate
them adequately with notions of illocution type (function type).
Linguists, on the other hand, are interested in taking the former
apart and determining the grammatical properties of the sentence moods
as form types.
I will argue, against the assumption that sentence type indicators
have no meaning at all, (a) that they do have some (although weak)
structural meaning that is relevant for the illocutionary potential of
the sentence, and (b), that in determining this structural meaning, it
is crucial to account for at least three kinds of connections sentence
types are involved in:
(i) The place of the sentence types in the larger family of clause
types (e.g., relation of (main) yes-no interrogatives and
(subordinate) whether-interrogatives)
(ii) The occurrence of construction types in different clause types
(e.g., wh-constructions in relatives, interrogatives, exclamatives,
no-matter-conditional antecedents)
(iii) Cross-linguistic similarities in the internal structure of
clause types (e.g., the distinction between a yes-no interrogative
with an indefinite and a wh-interrogative seems to result frequently
from different combinations of the same elements: an indefinite and an
interrogative marker)
--------------
LINGUISTICS DEPARTMENT COLLOQUIUM
Empty Categories and Configuration
Kenneth Hale
Ferrari P. Ward Professor of Linguistics at MIT
3:30 p.m., Tuesday, March 18
History (Bldg. 200) Rm. 217, Stanford University
followed by a reception in Linguistics (Bldg. 100)
Some putative non-configurational languages exhibit certain
problematic disparities between overt phonologically realized phrase
structure and the abstract grammatical structure projected from the
lexicon. This paper will examine one such disparity in an attempt to
formulate a preliminary conception of non-configurationality within a
general theory of grammar.
This talk is sponsored by the Linguistics Department of Stanford
University and is part of the 1985-86 Ferguson/Greenberg Lecture
Series on Language Universals and Sociolinguistics.
-------
∂12-Mar-86 1652 @SU-CSLI.ARPA:admin%cogsci@BERKELEY.EDU UCB Cognitive Science Seminar--March 18 (John Haviland)
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 12 Mar 86 16:51:59 PST
Received: from cogsci.berkeley.edu ([128.32.130.5].#Internet) by SU-CSLI.ARPA with TCP; Wed 12 Mar 86 16:38:37-PST
Received: by cogsci.berkeley.edu (5.45/1.9)
id AA08454; Wed, 12 Mar 86 16:33:14 PST
Date: Wed, 12 Mar 86 16:33:14 PST
From: admin%cogsci@berkeley.edu (Cognitive Science Program)
Message-Id: <8603130033.AA08454@cogsci.berkeley.edu>
To: allmsgs@cogsci.berkeley.edu, cogsci-friends@cogsci.berkeley.edu,
seminars@ucbvax.berkeley.edu
Subject: UCB Cognitive Science Seminar--March 18 (John Haviland)
Cc: admin@cogsci.berkeley.edu
BERKELEY COGNITIVE SCIENCE PROGRAM
Spring 1986
Cognitive Science Seminar - IDS 237B
Tuesday, March 18, 11:00 - 12:30
2515 Tolman Hall
Discussion: 12:30 - 1:30
3105 Tolman (Beach Room)
``Complex Referential Gestures in Guugu Yimidhirr''
John B. Haviland
Dept. of Anthropology, Australian National University
(currently at Institute for Advanced Study in the Behavioral Sciences)
Abstract
Ordinary talk depends on interlocutors' abilities to
construct and maintain some degree of shared perspective over
some domain of shared knowledge, given some negotiated
understanding of what the circumstances are. Aspects of per-
spective, references to universes of discourse, and
pointers to context are, of course, encoded in utterances.
Routinely, though, what is uttered interacts with what
remains unsaid: what is otherwise indicated, or what is
implicated by familiar conversational principles. I will
begin by examining the elaborate linguistic devices one Aus-
tralian language provides for talking about location and
motion. I will then connect the linguistic representation of
space (and the accompanying knowledge speakers must have of
space and geography) to non-spoken devices --- pointing ges-
tures --- that contribute to the bare referential content of
narrative performances. I will show that simply parsing a nar-
rative, or tracking its course, requires attention to the ges-
ticulation that forms part of the process of utterance. More-
over, I will show how, in this ethnographic context, the
meaning of a gesture (or of a word, for that matter) may
depend both on a practice of referring (only within which can
pointing be pointing at something) and on the construction of
a complex and shifting conceptual (often social) map. Finally
I will discuss ways that the full import of a gesture
(again, like a word) may, in context, go well beyond merely
establishing its referent.
---------------------------------------------------------------------
UPCOMING TALKS
Mar 25: Martin Braine, Psychology, NYU (currently at Stanford)
Apr 1: Elisabeth Bates, Psychology, UCSD
Apr 8: Björn Lindblom, Linguistics, University of Stock-
holm; Peter MacNeilage, Linguistics, University of
Texas; Michael Studdert-Kennedy, Psychology, Queens
College (all currently at the Center for Advanced
Study in the Behavioral Sciences)
Apr 29: Dedre Gentner, Psychology, University of Illinois
at Champaign-Urbana
May 6: Paul Rosenbloom, Computer Science and Psychology,
Stanford
-------------------------------------------------------------------------------
ELSEWHERE ON CAMPUS
On Monday, March 17, at the Anthropology Department Seminar,
Rick Shweder of the Committee on Human Development, University
of Chicago, and the Center for Advanced Study in Palo Alto,
will speak on "Symbolic and irrationalist interpretations of
other cultures: Is there a rationalist alternative?" from 3 to
5 p.m. in 160 Kroeber.
∂13-Mar-86 0920 INGRID@SU-CSLI.ARPA Garage Sale
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 13 Mar 86 09:19:31 PST
Date: Thu 13 Mar 86 09:09:19-PST
From: Ingrid Deiwiks <INGRID@SU-CSLI.ARPA>
Subject: Garage Sale
To: Friends@SU-CSLI.ARPA
*****************************************
G I A N T G A R A G E S A L E
*****************************************
MARCH 15 AND 16 -- 10 AM TO 5 PM
Furniture, Clothes, Motor Lawn Mower, Appliances, TV Sets, Lamps,
Books, China, and much more.
173 Santa Margarita Avenue, Menlo Park.
-------
∂13-Mar-86 1027 @SU-CSLI.ARPA:JROBINSON@SRI-WARBUCKS.ARPA Re: Garage Sale
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 13 Mar 86 10:27:36 PST
Received: from SRI-WARBUCKS.ARPA by SU-CSLI.ARPA with TCP; Thu 13 Mar 86 10:15:26-PST
Date: Thu 13 Mar 86 10:17:33-PST
From: Jane (aka) Jrobinson <JROBINSON@SRI-WARBUCKS.ARPA>
Subject: Re: Garage Sale
To: INGRID@SU-CSLI.ARPA
Cc: Friends@SU-CSLI.ARPA, JROBINSON@SRI-WARBUCKS.ARPA
Message-ID: <VAX-MM(180)+TOPSLIB(115)+PONY(0) 13-Mar-86 10:17:33.SRI-WARBUCKS.ARPA>
In-Reply-To: Message from "Ingrid Deiwiks <INGRID@SU-CSLI.ARPA>" of Thu
13 Mar 86 09:09:19-PST
REPLY-TO: JRobinson@SRI-AI
The use of the ARPA net to advertise private sales is a big no-no, and
people have been kicked off the net for it, and it endangers the
use of the net by the organization those people belong to. It CAN
happen.
J
-------
∂13-Mar-86 1046 POSER@SU-CSLI.ARPA Re: Garage Sale
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 13 Mar 86 10:46:12 PST
Date: Thu 13 Mar 86 10:37:23-PST
From: Bill Poser <POSER@SU-CSLI.ARPA>
Subject: Re: Garage Sale
To: JRobinson@SRI-AI.ARPA
cc: INGRID@SU-CSLI.ARPA, Friends@SU-CSLI.ARPA
In-Reply-To: Message from "Jane (aka) Jrobinson <JROBINSON@SRI-WARBUCKS.ARPA>" of Thu 13 Mar 86 10:19:22-PST
Roughly the same effect can be obtained by sending just to the local bboards,
which I believe is legit so long as the messages don't go out over the
ARPAnet.
-------
∂13-Mar-86 1059 INGRID@SU-CSLI.ARPA Garage Sale
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 13 Mar 86 10:58:57 PST
Date: Thu 13 Mar 86 10:39:01-PST
From: Ingrid Deiwiks <INGRID@SU-CSLI.ARPA>
Subject: Garage Sale
To: Friends@SU-CSLI.ARPA
Sorry, I won't do it again!
-------
∂17-Mar-86 1706 EMMA@SU-CSLI.ARPA Friends Mailing List
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 17 Mar 86 17:06:13 PST
Date: Mon 17 Mar 86 16:55:06-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Friends Mailing List
To: friends@SU-CSLI.ARPA
Tel: 497-3479
Tel: 723-3561
From now on all mail sent to FRIENDS@su-csli will be checked for
applicability before being remailed to the actual list.
Messages sent to FRIENDS@su-csli should contain information about
local (mid-peninsula) events sponsored by CSLI or of interest to CSLI
researchers. Examples are the calendar, the monthly, and calendar
updates.
If you wish to receive information about events sponsored by the
Berkeley Cognitive Science Program, please send a message to
admin%cogsci@berkeley.edu asking to be put on the cogsci-friends list.
(The CSLI bboard will continue to get Berkeley Cognitive Science
announcements.)
Yours
Emma Pease
(Emma@su-csli.arpa)
-------
∂17-Mar-86 1750 EMMA@SU-CSLI.ARPA CSLI Monthly
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 17 Mar 86 17:50:18 PST
Date: Mon 17 Mar 86 16:58:21-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: CSLI Monthly
To: friends@SU-CSLI.ARPA
Tel: 497-3479
Tel: 723-3561
The first issue of the CSLI Monthly will be mailed out tomorrow.
We are sending this message out today to warn you that it is quite
large (about 15 printed pages) and might cause problems if your mail
file is nearly full. If you absolutely cannot handle this size
message, send me (Emma@su-csli.arpa) a message before noon (of March
18, Pacific time), and I'll drop you from the mailing list this one
time. Future Monthlies will be much smaller and should cause no
problems.
This issue of the CSLI Monthly is stored in <CSLI>CSLI-Monthly.03-86
on su-csli.arpa and can be gotten by ftp as of this afternoon, for
those of you who like to look at things early.
Yours,
Emma Pease
-------
∂17-Mar-86 1823 EMMA@SU-CSLI.ARPA re: CSLI Monthly
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 17 Mar 86 18:23:08 PST
Date: Mon 17 Mar 86 17:05:24-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: re: CSLI Monthly
To: friends@SU-CSLI.ARPA
Tel: 497-3479
Tel: 723-3561
Hardcopies will be mailed out this Thursday with the CSLI Calendar
and will be available in the Ventura Front Hall.
Emma Pease
-------
∂18-Mar-86 1711 EMMA@SU-CSLI.ARPA CSLI Monthly, part I
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 18 Mar 86 17:09:43 PST
Date: Tue 18 Mar 86 15:59:11-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: CSLI Monthly, part I
To: friends@SU-CSLI.ARPA
Tel: 497-3479
Tel: 723-3561
C S L I M O N T H L Y
---------------------------------------------------------------------
March 15, 1986 Stanford Vol. 1, No. 1
---------------------------------------------------------------------
A monthly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
---------------------
Editor's note
This is the first issue of CSLI's monthly report of research
activities. This issue introduces CSLI and then characterizes each of
its current research projects; following issues will report on
individual projects in more detail and discuss some of the research
questions raised here.
---------------------
What is CSLI?
CSLI is a research institute devoted to building theories about the
nature of information and how it is conveyed, processed, stored, and
transformed through the use of language and in computation.
Researchers include computer scientists, linguists, philosophers,
psychologists, and workers in artificial intelligence from several San
Francisco Bay Area institutions as well as graduate students,
postdoctoral fellows, and visiting scholars from around the world.
Where is it located?
CSLI's location is one of its more interesting features: it is
discontinuous. CSLI research and activities are conducted at SRI
International, Stanford University, and Xerox PARC. But there is a
headquarters, Ventura Hall at Stanford, where CSLI's central
administration is located, most visitors and students are housed, and the
majority of larger events take place. Most CSLI researchers spend some
part of their time each week at Ventura, talking with students, postdocs,
and researchers from sites other than their own.
What is its research goal?
In using the rich resources language provides for dealing with
information, we all show mastery of a powerful apparatus which
includes concepts of meaning, reference, knowledge, desire, and
intention. CSLI's goal is to develop theories of information that are
explicit and systematic and at least as rich as our implicit
understanding, and to apply these theories to the analysis of
language. The implications of these theories should be far-reaching,
not only for the study of natural languages, but also for the analysis
and design of computer languages.
Current efforts to endow a computer with human information-processing
abilities are being made without benefit of a theory of information
content. This is like trying to design a calculator without a precise
formulation of the laws of arithmetic: some of the pieces will be
right, but their unification into a smoothly running whole is
unlikely, if not impossible. For example, natural language database
query systems can handle restricted uses of language, but may yield
unexpected results when faced with ambiguity, anaphora, or indirect
speech acts. Other artificial intelligence programs count on
similarly limited domains such as characteristics of specific diseases
or rules of game-playing. In real-time applications, unexpected
failures are often the result of our inability to account fully for
interactions of machine-processes with real world events. Even if we
cannot resolve all the intricacies, a full characterization of them
will increase our understanding of the limitations of computer
technology and influence decisions we make about its use.
CSLI researchers conceive of their work as part of the development of
a newly emerging science of information, computation, and cognition.
They are convinced that a theory of information cannot be built by
individuals from any one of the disciplines that have traditionally
studied corners of this science. The endeavor requires the
collaboration of all. The most explicit theories of meaning come from
philosophy and logic, but these cannot be straightforwardly applied to
natural languages. The most explicit and detailed theories of
grammatical structure come from linguistics; these deal well with
phrases and sentences, but cannot be directly applied to larger units
of discourse. Computer scientists can give detailed accounts of
programs, themselves large units of discourse, but the "sentences" out
of which programs are built exhibit far less complexity than those of
natural languages. Action has been studied in various ways by various
disciplines, but the action theories that are well-worked-out
mathematically -- like "choice theory" in economics -- are too simple
to capture real-life applications. And those that seem more promising
-- like Aristotle's theory of "practical reason" -- haven't been
developed to the point where they can really be applied. Logic and
psychology have quite a lot to tell us about inference and reasoning,
each in its different way, but this work has seldom been related to
natural language uses.
CSLI was founded by researchers who wish to work on more than the
corners. Their two and a half years of work together has firmly
committed them to a joint effort.
How does it work?
Since its inception, the Center has included a multitude of mechanisms
to promote formal and informal interaction, including weekly seminars
and colloquia, frequent project meetings, and daily teas. But the
nature of the interaction has changed over time. At first, the main
function was mutual education of a general sort. The researchers
wanted to learn about each other's opinions, methods, approaches,
biases, and experiences. They discovered differences in rather
general metatheoretical questions and in methodology, as well as in
specific issues. They discussed basic questions such as the nature of
evidence, the relationship between theory and formalisms, and the
nature of representation. For many, reading habits changed as well --
"keeping up" now meant reviewing recent research in several
disciplines.
During this time, CSLI was aglow with a multitude of ideas for
interdisciplinary collaboration, and each researcher was trying to
incorporate every one of them into his or her research. They were
tempted to spend all their research time in lively debates on
fundamental issues. It was exciting and draining. But choices had to
be made and some convergent paths selected.
In time, the interactions became more focussed, and new research
constellations were formed. The current research group on situated
automata, for example, is in part the result of a CSLI seminar
organized during the first winter to explore the idea that action
theory in contemporary analytical philosophy and planning research in
AI should have something to say to each other. Discussion focussed on
the assumption, central to most AI work, that the agent's relation to
the world is mediated by logical representations. Philosophers argued
that the assumption was groundless at best, absurd at worst, while
computer scientists argued that in rejecting the "representational"
approach, philosophers were not providing an equally detailed
alternative model for the causal connections between state changes in
agents and in the world. Out of this interaction came a new goal: to
give an account of an agent's place in the world that, on the one
hand, is as detailed and rigorous as the AI accounts, and, on the
other hand, does not start from an a priori assumption of a
representational connection.
CSLI's current research projects represent this sort of convergence of
theories and ideas. Most activities of mutual education are now
connected with the projects. However, the impact of the first two
years has not dissipated. Mechanisms are being put into place to
ensure that new connections are encouraged and strengthened, and the
respect CSLI has for individual differences ensures that vigorous
debates will continue into the foreseeable future.
What is it like to work at CSLI?
Each of the institutions and disciplines involved in CSLI has its own
character. A visitor or student will probably be spending a good bit
of time at the Ventura headquarters, where a sort of indigenous CSLI
culture has developed. Imagine a typical philosopher, a typical
linguist, and a typical computer scientist. The philosopher is happy
with low-key funky surroundings, and can't be bothered with machinery,
relying instead on books, paper, and number 2 pencils. The linguist
is accustomed to low-key funky surroundings, and is content in any
setting where there are other linguists, coffee, and devices
(blackboards, whiteboards, or computers) that can handle trees or
functional diagrams. The computer scientist has become part of the
wonderful new technology s/he has helped to develop, to the extent
that s/he can't even imagine how to communicate with the person at the
next desk when the computer is down.
All of these folk feel right at home at Ventura Hall. It is an old,
onetime residence on the Stanford campus with a carriage house in
back, trailers in the front yard, and flowers carefully planted amid
the freely growing weeds. Inside, there are Dandelions in every nook
and cranny, on one of which sits the marmalade cat Ciseli, enjoying
the warmth. It is no accident that Ventura accommodates all of these
types, for it arose from their shared vision and their need for an
"office away from the office" in which to do their collaborative
research.
What made CSLI possible?
o 40 researchers
o 5 academic disciplines
o 3 separate locations
o 3 different administrations
o 1 common research goal
combined with
o A large grant from the System Development Foundation
o Smaller grants from the National Science Foundation
o Equipment grants from Xerox Corporation and Digital Equipment
Corporation
o The generosity and vision of Stanford University, SRI
International, and Xerox PARC
What keeps it together?
o Commitment to a common goal
o A central administration woven around and through the site
administrations
o A dedicated support staff at all three sites
o Visiting scholars
o Postdoctoral fellows
o Graduate students
o Telephone wires and computer cables
(End of first part)
-------
∂18-Mar-86 1734 EMMA@SU-CSLI.ARPA Old Stanford phone numbers
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 18 Mar 86 17:33:41 PST
Date: Tue 18 Mar 86 16:20:26-PST
From: brad
Subject: Old Stanford phone numbers
Sender: EMMA@SU-CSLI.ARPA
To: friends@SU-CSLI.ARPA
Reply-To: horak@su-csli.arpa
Tel: 497-3479
Tel: 723-3561
All Stanford University 497 prefix phone numbers will be disconnected
this Friday, March 21 at around 6pm. The new Stanford 723 & 725
phone numbers should be used after this time.
Directory assistance for the new numbers can be reached by calling:
415-723-2300 University
415-723-4000 Hospital
415-723-0628 CSLI
--Brad
-------
∂18-Mar-86 1821 EMMA@SU-CSLI.ARPA CSLI Monthly, part II
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 18 Mar 86 18:20:56 PST
Date: Tue 18 Mar 86 16:01:41-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: CSLI Monthly, part II
To: friends@SU-CSLI.ARPA
Tel: 497-3479
Tel: 723-3561
(start of second part)
How do the present projects contribute to the common goal?
One schema for organizing our research activities is the following,
based roughly on sizes of information chunks:
o The nature of information, representation, and action
o Information and meaning in extended discourse
o Information and meaning in sentences
o Information and meaning in words
o Sources of information
As with any schema, this one is useful only as long as it's taken with
a grain of salt. It doesn't, for instance, imply an ordering on the
process of understanding information; it doesn't mean that information
is passed upwards, or from one level to its nearest neighbors; and it
certainly doesn't mean that each project is limited in its efforts or
sphere of influence by its position in the schema. However, any other
schema would be equally invalid, and this one does provide a framework
through which we can make use of parallelisms (present and desired)
between human and computer languages and point to these and other
links among our research projects.
THE NATURE OF INFORMATION, REPRESENTATION, AND ACTION
A full account of the content and transfer of information requires us
to embed theories of meaning and interpretation in the real world. A
first step is to understand how information about the world is
represented. The [Representation and Reasoning] project is developing
a general theory of representation and modelling that will
characterize a variety of representational systems including
sentences, utterances, parse-trees, computer screens, minds, and
computers. The goal is to build the foundations of a theory of
computation that can explain what it is to process, rather than merely
to carry, information.
The group considers the following properties of representation
essential:
o Representation is a more restrictive notion than information,
but a broader one than language. (Representation includes
photographs and other physical simulations, such as model
airplanes, and also uses of non-linguistic symbols like
numbers to represent distances and sets to represent
meanings.)
o Representation is circumstantially dependent, not only because
it is specifically relational, but also because whether A
represents B depends, in general, on the whole context
in which A and B appear.
o There is no reason to suppose that representation is "formal";
it emerges out of partially disconnected physically
embodied systems or processes.
o It matters that "represent" is a verb. Representational
acts are the primary objects of study, and
representational structures, especially those requiring an
independent act of interpretation, are taken as derivative.
Currently, the research falls into these subprojects: developing a
typology of "correspondence" relations that can hold between A and B
if A represents B; analyzing the philosophical foundations of the
representational act; examining the notion of models and modelling (a
species of representation) with particular reference to their use in
the model-theoretic approach to semantics; and examining the
representational foundations of computation and information
processing.
Acts of communication do not occur in a vacuum but among a host of
activities, including other acts of communication. In addition, the
communication often refers to other situations and assumes a certain
state of mind on the part of the receiver. The [Situation Theory and
Situation Semantics] project is a coordinated effort, both to develop
a unified theory of meaning and information content that makes use of
all of these activities and assumptions, and to apply that theory to
specific problems that have arisen within the disciplines of
philosophy, linguistics, computer science, and artificial
intelligence. The guiding idea behind the formation of this group was
to use provisional versions of situation theory to give detailed
analyses of the semantics of natural and computer languages, both to
hone intuitions about the information theoretic structures required by
such analyses, and to provide more constraining criteria of adequacy
on theories of such structures. The aim is to reach the point where
these intuitions and criteria become precise enough to provide the
basis of a mathematically rigorous, axiomatic theory of information
content.
The group has five overlapping semigroups working on concrete
instances of some traditional problems associated with attempts to
develop theories of information content: developing an
information-based theory of inference, developing an information-based
theory of representation, examining problems in the semantics of
natural languages, examining problems in the semantics of computation,
and axiomatizing and modeling situation theory.
This group includes members from every discipline and every
institution represented at CSLI. They rely on their diverse
backgrounds to draw on insights and techniques from all parts of CSLI
in solving the problems they have set for themselves, and they hope
their progress will similarly affect work in the other projects.
The [Situated Automata] project is concerned with the analysis of
dynamic informational properties of computational systems embedded in
larger environments, especially physical environments. The theory
takes as its point of departure a model of physical and computational
systems in which the concept of information is defined in terms of
logical relationships between the state of a process (e.g., a machine)
and that of its surrounding world. Because of constraints between a
process and its environment, not every state of the process-
environment pair is possible, in general. A process x is said to
carry the information that p in a situation where its internal state
is v if p holds in all situations in which x is in state v.
This definition leads directly to models for certain well-known logics
of knowledge. More interestingly, perhaps, it also suggests synthetic
approaches to the design of dynamic information systems.
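As a schematic rendering of the definition above (a hypothetical toy
in Python with an invented constraint set, not the project's
formalism), one can range over the possible state/world pairs:

    # Possible joint situations (process state, world state) permitted
    # by the constraints between process and environment (invented).
    POSSIBLE = {
        ("hot",  "temp_high"),
        ("cold", "temp_low"),
        ("cold", "temp_mid"),
    }

    def carries(state, p):
        # The process in `state` carries the information that p iff p
        # holds in every possible situation where it is in `state`.
        worlds = [w for (s, w) in POSSIBLE if s == state]
        return bool(worlds) and all(p(w) for w in worlds)

    print(carries("hot",  lambda w: w == "temp_high"))  # True
    print(carries("cold", lambda w: w == "temp_low"))   # False: temp_mid is also possible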
In order to deal with the enormous number of states typically
encountered in realistic systems, the theory is being extended to
hierarchically constructed machines, the informational characteristics
of which can be rigorously derived in a compositional fashion from
those of their component machines. Theoretical work is also being done
to relate this work to abstract models of concurrent processes.
On the more practical level, the situated automata project has been
developing tools for constructing complex machines with well-defined
informational properties, and has been testing the theory by applying
these tools to software design for robots and other reactive systems.
Planned future work includes applying the situated automata framework
to the analysis of dynamic informational properties of systems engaged
in linguistic interaction.
Although the work is still in its early stages, it appears that the theory will
make a technical contribution to the ongoing debate in AI and
philosophy of mind over the role of interpreted representations
("language of thought") in the semanticity or intensionality of mental
states. The situated automata account indicates how logical
conditions can be assigned systematically to arbitrary computational
states that are not prestructured as interpretable linguistic
entities, and thus it serves as at least prima facie evidence against
the need for a language of thought in order to achieve full
semanticity.
In the [Rational Agency] project, philosophers and researchers in AI
are merging their two traditions in the study of rational behavior to
build a theory of belief, desire, and intention as these attitudes act
collectively, informed by perception, to produce action. They seek
models that take account of the resource limitations of humans and
computers, and formal, automatable theories that can be used to endow
artificial agents with the requisite commonsense reasoning abilities.
They are investigating ways by which planning will fit into their
theory of rationality, e.g., can plans be reduced to some configuration
of other, primitive mental states, or must they also be introduced as
a primitive? Finally, because a main function of planning is the
coordination of an agent's own projects and of interpersonal
activities, they require their theories to account for multiagent
interaction.
Recent developments in philosophy of action have moved beyond the
"belief/desire" architecture and have provided insights about the
nature of "intention formation" and its function as a mechanism
required by a resource-bounded agent in evaluating and making
decisions in a constantly changing world. Recent developments in AI
planning theory have moved beyond a view of plans as sets of actions
for achieving predetermined goals that are guaranteed consistent, and
have provided insights into the nature of intention realization.
Researchers in the Rational Agency project are bringing about a
convergence of these two developments and are looking to it as the
cornerstone of their future work.
The [Semantics of Computer Languages] project is seeking to develop a
theory of semantics of computational languages through the design of a
specific family of languages for system description and development.
The theory will serve as the basis for a variety of constructed
languages for describing, analyzing, and designing real world
situations and systems. It will account for a number of issues that
have not been adequately dealt with either in work on natural
language semantics or in the semantics of programming languages. For
example, in describing any complex real-world situation, people mix
descriptions at different levels of abstraction and detail. They use
generalization, composition, idealization, analogy, and other
"higher-level" descriptions to simplify in some way the account that
is needed at a "lower" or more detailed level. In working with
programming and specification languages, there is a semantic
discontinuity in moving from one abstraction or approximation to
another. In simple cases there can be a clear mapping, but there is
no theory to deal adequately with more general cases occurring in
either natural language or computing languages.
Similarly, much of the work on computing languages has dealt with the
computer in a mathematical domain of inputs and outputs, ignoring its
embodiment as a physical process. This abstraction is not adequate
for many of the phenomena of real computing such as the temporal,
spatial, and causal constraints that can be described among the
components of physical systems.
The research strategy of this group is to interweave three levels of
research: theory, experiments, and environments. The group is
experimenting with a class of languages called "system description
languages" which share some properties with programming languages, but
have a semantics more in the tradition of model theory and work on
natural languages. Finally, to provide the ease and flexibility they
need for experimenting with description languages, the group is
developing an environment that is a tool kit for designing and working
with formal languages.
Researchers in the closely related [Embedded Computation] project wish
to understand how the local processing constraints, physical
embodiment, and real-time activity of a computer or other
computational system interact with the relational constraints of
representing and conveying information and language. They wish to
account for these interactions in information processing systems that
range in complexity from those with perceptual mechanisms connected
rather directly to their environment such as thermostats and the
sensory transducers of humans, to those able to use language, reason
deliberately, and reflect in a detached way about situations remote in
space, time, or possibility.
Members of the project are searching for system architectures and
theoretical techniques that can adequately analyze this range of
capacities. For example, they wish to account for the full range of
semantic relations between the processes and the embedding context and
to give a semantic analysis focussed on activity and processing.
Currently, they are formulating type theories able to deal with both
procedural and declarative information, developing a theoretical
framework for a full semantical analysis of a simple robot, and
working with CSLI's Situation Theory and Situation Semantics group to
design an "inference engine" for situation theory.
The [Analysis of Graphical Representation] project is concerned with
developing an account of the document as an information-bearing
artifact, a topic which until now has been largely neglected by many
of the fields that count language among their subject matter. Issues
include: the relationship between the concepts of "text" and
"document", an analysis of systems of graphical morphology, and the
nature of writing in relation to representational systems in general.
This project is listed in this section because of its emphasis on
representation and information but could just as easily have been listed in
the next section because of its concern for written language as
expression of connected discourse.
(end of second part)
-------
∂18-Mar-86 1913 EMMA@SU-CSLI.ARPA CSLI Monthly, part III
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 18 Mar 86 19:13:01 PST
Date: Tue 18 Mar 86 16:02:54-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: CSLI Monthly, part III
To: friends@SU-CSLI.ARPA
Tel: 497-3479
Tel: 723-3561
(start of third part)
INFORMATION AND MEANING IN EXTENDED DISCOURSE
The information content of a statement is only partially determined by
the sentence used. Other tools for interpretation come from the
discourse as a whole, the context of the discourse, and the
states-of-mind of the participating agents.
Members of [Discourse, Intention, and Action] are developing various
components of theories of discourse, emphasizing the use of extended
sequences of utterances to achieve particular effects and the fact
that discourse is an activity of two (or more) participants located in
particular contexts. They are extending the kind of semantic accounts
often given to natural languages in two directions: first, by
accounting for some non-declarative utterances, particularly
interrogatives and imperatives, and second, by dealing with discourses
containing several utterances, possibly produced by several speakers.
The first objective is to be achieved by considering utterances as not
merely describing a situation (type), but as effecting a change in the
mental state of the participants; the second, by studying the
constraints imposed on utterance sequences by the goals of the participants,
the discourse situation, commonsense knowledge, and human attentional
and processing capabilities.
The project is proceeding along three intertwined areas of
investigation:
o Discourse. Research on the nature of discourse includes a study
of the components of discourse structure, the nature of coherence
relations, the derivation of discourse as a product of rational
interaction, and embedded discourse. Another concern is how
patterns in word order and intonation correlate with structure
at the discourse level.
o Sentence-level phenomena. This subproject examines questions of
illocution from the perspective of a theory of rational interaction.
It is concerned with the contribution of utterance mood to such a
theory, with illocutionary act definitions, with indirect speech acts,
and with a theory that can determine what is implicated in an
utterance.
o Subutterance phenomena. In this area, the group is examining the
relation between referring expressions (including indexicals,
pronouns, and descriptions) and speakers' and hearers' beliefs, mutual
beliefs, and intentions.
In thinking about how to make computer languages more like natural
languages, it is useful to view computer programs as examples of
extended discourse. [Linguistic Approaches to Computer Languages] is
a pilot project to investigate the application of methods and findings
from research on natural languages to the design and description of
high-level computer languages. The linguistically interesting
approach to making computer languages resemble natural languages is
not to graft English words or phrases onto the computer language in a
superficial way, but rather to exploit the rich inventory of encoding
strategies that have developed during the evolution of natural
language and that humans appear especially attuned to. The increasing
complexity of computer languages, current progress in formal
linguistics, and the growing importance of ergonomic factors in
computer language design motivate a combined effort between computer
science and linguistics.
Long-term issues in the emerging field will include: temporal
expressions in the communication among parallel processes, the use of
speech acts in message-passing between objects and processors, and the
use of discourse information to support ellipsis.
Currently, the group is investigating the need for and feasibility of
applying linguistic approaches, techniques, and findings to a set of
sample problems:
o The use of partially free word order among the arguments of
functions to allow flexibility in the order of evaluation and to
eliminate the need for the user to memorize arbitrary argument
orders. This requires disambiguation by sort, type, or special
marking (see the sketch after this list).
o The exploitation of parallels between natural language parsing
schemes based on complex structured representations, and type
inference in polymorphically typed computer languages.
o The use of type inheritance systems for imposing a conceptually
transparent structure on the lexicon.
o The introduction of morphology for marking related lexical items as
to type (derivational morphology), thematic structure (relation
changing), or role (case marking).
o The need for less restricted uses of proforms in computer
languages than currently exist.
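A speculative sketch of the first item above (invented Python, not
one of the project's sample languages): a function whose two
arguments may appear in either order, disambiguated by runtime type
rather than by position:

    def label_distance(*args):
        # Accept a str label and a numeric distance in either order,
        # sorting the arguments out by type (a crude stand-in for
        # disambiguation by sort, type, or special marking).
        label    = next(a for a in args if isinstance(a, str))
        distance = next(a for a in args if isinstance(a, (int, float)))
        return "%s: %s km" % (label, distance)

    # Both argument orders denote the same call:
    print(label_distance("route", 1.2))
    print(label_distance(1.2, "route"))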
The goal of the [Grammatical Theory and Discourse Structure] project
is to integrate a particular theory of grammar, the lexical-functional
theory (LFG), with a theory of discourse structure, relating the
project equally to this and the following section. LFG, as a very
explicit and highly modular theory, provides a useful framework from
which to study the interaction between discourse and sentence
phenomena. Moreover, the general architecture of the framework allows
experimentation with different modes of interaction between different
components. Linguistic models up to now, LFG included, have displayed
a marked preference for the serial approach. However, there is no
need for the components of grammars built on unification to interact
in a serial rather than a more parallel fashion. The different
subcomponents can constrain the output without being in linear order.
Current work is advancing in the form of two subprojects: the first is
extending the ideas in Discourse Representation Theory and Situation
Semantics to a richer theory of anaphora and deixis, to account for
such phenomena as logophoric reference, topic, and focus; and the
second is studying the grammaticalization (the way that phenomena are
formally and systematically encoded in the grammars of natural
languages) of such discourse phenomena as logophoricity, topic, and
focus in natural languages, in order to recover from the formal
subsystems of word structure, word order, and prosodic structure a
rich set of empirical constraints on the integrated theory.
INFORMATION AND MEANING IN SENTENCES
Two closely connected projects are looking at representations of
sentence structure from the point of view of several formalisms; they
are searching for commonalities with respect to meaning and
interpretation. One seeks a conceptual foundation for the theories,
and the other seeks representations with direct ties to the semantics.
Specifically, the goal of the [Foundations of Grammar] project is a
better understanding of methods of encoding linguistic information as
systems of rules or constraints, and of how that information can be
used in recognition and generation. The group is developing, not a
particular theory of grammar, but rather a conceptual foundation and a
common frame of reference for such theories. Their current research
involves three efforts which are being carried out in tandem: the
development of a mathematical characterization of techniques of
grammatical description, the study of their computational
ramifications, and an examination of their empirical motivations.
The group is incorporating their results in a computational tool kit
for implementing grammatical theories, and the result will be a
facility for experimentation with various syntactic, semantic, and
morphological theories and processing strategies.
This focus on the common conceptual basis of current linguistic
theories and the design of formal and computational techniques to
further their development will contribute to our understanding of the
relationship between language and information. The research is
concerned, on the one hand, with the ways in which information about
the world is represented in linguistic structures and the
computational techniques for extracting and storing that information,
and, on the other hand, with the way information about language itself
is represented in grammars and how that information is used in
generation and parsing.
The [Head-Driven Phrase Structure Grammar] project is analyzing the
structure and interpretation of natural language within the HPSG
framework which incorporates theoretical and analytic concepts from
Generalized Phrase Structure Grammar, Lexical Functional Grammar,
Situation Semantics, Categorial Grammar, and Functional Unification
grammar. The goal is a single-level, constraint-based
characterization of linguistic structures, rules, and principles which
interact through the operation of unification.
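To give a feel for the operation of unification itself, the fragment
below is a toy Python sketch of my own, not the group's
implementation: feature structures are rendered as nested
dictionaries, and two structures unify only when their shared
features carry compatible values.

    def unify(fs1, fs2):
        # Dictionaries unify feature by feature; atoms only if equal.
        if isinstance(fs1, dict) and isinstance(fs2, dict):
            result = dict(fs1)
            for feat, val in fs2.items():
                if feat in result:
                    merged = unify(result[feat], val)
                    if merged is None:      # clash: no unifier exists
                        return None
                    result[feat] = merged
                else:
                    result[feat] = val
            return result
        return fs1 if fs1 == fs2 else None

    # Agreement as constraint interaction: the verb demands third
    # person singular; the subject supplies person and case.
    verb = {"subj": {"agr": {"person": 3, "number": "sg"}}}
    subj = {"subj": {"agr": {"person": 3}, "case": "nom"}}
    print(unify(verb, subj))
    # {'subj': {'agr': {'person': 3, 'number': 'sg'}, 'case': 'nom'}}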
Current research is addressing such issues as the analysis of the
syntax and semantics of unbounded dependency constructions,
hierarchical, frame-based models of the structure of the lexicon,
constraints governing the interface of syntactic and semantic
structure, word order variation, discontinuous grammatical
dependencies, agreement and incorporation phenomena in a variety of
languages, the theory of lexical rules, and approaches to semantic
structure that synthesize ideas from Situation Semantics, the theory
of thematic roles, and Discourse Representation Theory. The HPSG
research group is also developing various computational
implementations, in close consultation with ongoing research in the
Foundations of Grammar project.
(end of third part)
-------
∂18-Mar-86 2000 EMMA@SU-CSLI.ARPA CSLI Monthly, part IV
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 18 Mar 86 19:56:05 PST
Date: Tue 18 Mar 86 16:04:05-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: CSLI Monthly, part IV
To: friends@SU-CSLI.ARPA
Tel: 497-3479
Tel: 723-3561
(start of fourth part)
INFORMATION AND MEANING IN WORDS
Two projects are exploring the structure of information in the lexicon
and its relation to larger units of communication.
The goal of the [Lexical Project] is to develop a workable lexicon
that integrates semantic knowledge about lexical items with the
semantic and syntactic frameworks currently under development at CSLI.
The group has sorted its task into a linguistic problem and a
computational problem: the linguistic problem is to determine what the
content of a lexical entry must be, and the computational problem is
to understand how this knowledge can be built into an online lexicon.
Currently, they are addressing four issues connected with the
linguistic problem:
o How do knowledge of the world and lexical meaning link up?
o How should lexical meaning be represented?
o What is the place of lexico-semantic information in the overall
grammar?
o What is the structure of the lexicon?
Although, conceptually, the computational problem cannot be solved
without first solving the linguistic problem, the group is addressing
the computational problem simultaneously in an effort to avoid
piecemeal, limited, or unimplementable solutions to the linguistic
problem.
The [AFT Lexical Representation Theory] project is developing three
basic parts of Aitiational Frame Theory, a theory of lexical
representation which gives a rich internal structure to lexical
meanings and is designed to feed into generative syntactic
representations.
The first part holds that meanings are instructions that only partially
specify the referents of terms. The second concerns the
unification of AFT representations of the meanings of terms joined by
conjunction or disjunction to form complex predicates. The third part
concerns the path from intension to extension; according to AFT, the
meaning specification, together with certain assumptions about human
explanatory schemes, generates a number of contexts, and the extension
is determined only within such contexts.
SOURCES OF INFORMATION
For human agents, speech and vision are the primary sources of
information about the world, and we expect similar mechanisms to
accommodate our communication with computers. Three projects at CSLI
are concerned with representing and characterizing information
contained in speech signals and with relating this information to
other aspects of the communication process. A fourth is exploring
comparable aspects of visual information.
The [Phonology and Phonetics] project is investigating the
organization of phonology and its role in language structure, with
particular emphasis on postlexical phonology. The work involves an
investigation of two orthogonal aspects of the organization of the
phonology:
o The divisions between the lexical phonology, the
postlexical phonology, and the phonetics
o The ways in which each of these levels interacts
with syntactic, semantic, and discourse factors
The group ties itself to the representational and semantic aspects of
CSLI's work by assuming that phonetics interprets phonology in much
the same way as semantics interprets syntax, and that the study of
interactions between the phonology and syntax, semantics, and
discourse will constrain the theories of these other components.
The research will suggest ways of incorporating cues to meaning from
the phonological (particularly intonational) realization into natural
language parsing and understanding systems. For example, such an
apparently purely mechanical articulatory phenomenon as the elision in
"Bill is here" --> "Bill's here" is systematically blocked when a
syntactic gap (inaudible in itself) follows: "My dad is stronger than
Bill is" cannot be reduced to "My dad is stronger than Bill's" (which
means something quite different). Even subphonemic differences in
timing, speech rhythm, and syllabification are known to correlate
systematically with semantic interpretation. The group's hypothesis
is that the phonological interpretation of utterances takes place
incrementally within each component of the grammar. Thus, the rules
of word phonology apply in tandem with the rules of morphology in the
lexicon, and sentences formed in the syntax are in turn subject to
postlexical phonological processes.
The [Finite State Morphology] project is bringing a new kind of
dialogue between linguists and computer scientists to CSLI. Until
recently, descriptive and theoretical work in phonology and morphology
has proceeded without parallel mathematical and computational efforts.
In spite of the lively debate on the relative roles of rules,
constraints, and representations in recent years, there has been
relatively little careful formalization of these new theories and few
studies of their mathematical properties. Moreover, there have been
very few attempts to apply these ideas towards generation or
recognition of words.
Finite State Morphology is a framework within computational morphology
that uses finite state devices to represent correspondences between
lexical and surface representations of morphemes (a toy example of
such a correspondence follows the list below). CSLI's FSM group is
working within this framework to:
o Study mathematical properties of phonological rule systems
o Develop an automatic compiler for phonological rules
o Suggest improvements to current methods of handling
morphosyntax
o Attempt to resolve the issues where there is a conflict between
finite state approaches and current phonological theory
o Implement a model for multi-tier phonological descriptions and
hierarchical structures
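By way of illustration only -- a toy of my own, not the group's
system -- the Python fragment below mimics such a lexical-to-surface
correspondence: it relates the lexical string `fox+s' to the surface
string `foxes' by inserting an epenthetic `e' after stems ending in
s, x, or z (a crude stand-in for sibilants).

    def transduce(lexical):
        surface, state = [], "start"
        for ch in lexical:
            if ch == "+":                    # morpheme boundary
                state = ("after_sibilant"
                         if surface and surface[-1] in "sxz" else "boundary")
            elif state == "after_sibilant" and ch == "s":
                surface.extend(["e", "s"])   # epenthesis before plural -s
                state = "start"
            else:
                surface.append(ch)
                state = "start"
        return "".join(surface)

    assert transduce("fox+s") == "foxes"
    assert transduce("cat+s") == "cats"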
The goal of the project on [Computational Models of Spoken Language]
is to formally specify, through computational models, the information
projected from speech signals and how that information is represented
and used in speech analysis.
Their point of departure is an exploration of two related hypotheses:
1) that we hear more than we make sense of, that is, that we actively
discard information, and 2) that we add information to that which is
present in the signal, that is, that we fill in what is not there.
The group hopes that their computational exploration of ordinary
speech will lead to a deeper understanding of the nature of
information transference.
The assumption that internal representations undergo some form of
computational processing is controversial both at CSLI and in the
general scientific community.
this debate in the form of data and facts regarding the nature of the
speech signal, what must be projected from the signal and what is
judged to be nonlinguistic, and what constitutes the necessary
components in recognizing and parsing the spoken utterance.
Currently, they are investigating four facets of this problem:
symbolic and physicalist analyses of continuous speech, properties of
representations of the English word, properties of representations of
the English phrase, and speech and parsing theory.
The [Visual Communication] project is concerned with mechanisms of visual
communication and visual languages and the identification of visual
regularities that support the distinctions and classes necessary for
general-purpose reasoning. The group assumes that the manner in which
visual languages convey meaning is, at least in part, fundamentally
different from conventions in spoken language and therefore requires
study beyond the confines of the standard linguistic tradition. They are
testing this hypothesis by examining conventions that have evolved for
various forms of visual communication including visual languages such as
ASL, illustrations, blackboard interactions, and graphic interfaces.
They seek to provide some perceptual underpinnings to theories of meaning
and information through an understanding of the way we parse the world
into meaningful parts ("visual morphemes") and the way we identify those
parts from sensory data.
--Elizabeth Macken
Editor
(end of CSLI Monthly)
-------
∂19-Mar-86 1705 EMMA@SU-CSLI.ARPA Calendar, March 20, No. 8
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 19 Mar 86 17:04:05 PST
Date: Wed 19 Mar 86 16:50:31-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Calendar, March 20, No. 8
To: friends@SU-CSLI.ARPA
Tel: 497-3479
Tel: 723-3561
!
C S L I C A L E N D A R O F P U B L I C E V E N T S
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
March 20, 1986 Stanford Vol. 1, No. 8
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR THIS THURSDAY, March 20, 1986
12 noon TINLunch
Ventura Hall Models, Metaphysics and the Vagaries of Empiricism
Conference Room by Marx W. Wartofsky
Discussion led by Ivan Blair (Blair@su-csli)
2:15 p.m. CSLI Seminar
Ventura Hall The Structural Meaning of Clause Type: Capturing
Trailer Classroom Cross-modal and Cross-linguistic Generalizations
Dietmar Zaefferer (G.Zaeff@su-csli)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
No Colloquium this week
--------------
CSLI ACTIVITIES FOR NEXT THURSDAY, March 27, 1986
12 noon TINLunch
Ventura Hall No TINLunch this week
Conference Room
2:15 p.m. CSLI Seminar
Ventura Hall Reflexivisation: Some Connections Between
Trailer Classroom Lexical, Syntactic, and Semantic Representation
Annie Zaenen, Peter Sells, Draga Zec
(Abstract on page 2)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
No Colloquium this week
--------------
!
Page 2 CSLI Calendar March 20, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
NEXT WEEK'S SEMINAR
Reflexivisation:
Some Connections Between
Lexical, Syntactic, and Semantic Representation
Annie Zaenen, Peter Sells, Draga Zec
(Zaenen.pa@xerox, Sells@su-csli, Zec@su-csli)
This presentation will concentrate on cross-linguistic variation in
the expression of simple direct object reflexivisation (as found in
English in a sentence like `John washed himself'). It will be shown
that the counterparts of such sentences in different languages can be
lexically transitive or intransitive, can be expressed in one word or
in two or three, and allow for one or more semantic interpretations
requiring semantic representations that treat the reflexive as a bound
variable in some cases but not in others. The data presented will show
that some simple ideas about the mapping from lexical arguments to
surface structure constituents and/or to semantic arguments are not
tenable.
--------------
PIXELS AND PREDICATES MEETING
A Data-Flow Environment for an Interactive Graphics
Paul Haeberli, Silicon Graphics Inc.
1:00 p.m., Wednesday, March 26, Ventura trailers
Multiple windows are a common feature of contemporary interactive
programming and application environments, but facilities for
communicating data between windows have been limited. Operating
system extensions are described that allow programs to be combined in
a flexible way. A data-flow manager is introduced to control the flow
of data between concurrent processes. This system allows the
interconnection of processes to be changed interactively, and places
no limitations on the structure of process interconnection. As a
result, this environment encourages creation of simple, modular
graphics tools that work well together.
A video tape of the system will be shown during the talk; there will
be a demo afterwards on an IRIS workstation.
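The core of such a manager is easy to suggest in outline; the sketch
below is hypothetical Python of my own, based only on the abstract
and not on Haeberli's system. Tools stay ignorant of one another, and
the wiring between them can be changed while everything runs.

    class DataFlowManager:
        def __init__(self):
            self.wires = {}                  # source name -> list of sinks

        def connect(self, source, sink):
            self.wires.setdefault(source, []).append(sink)

        def disconnect(self, source, sink):
            self.wires[source].remove(sink)  # interactive rewiring

        def emit(self, source, data):
            for sink in self.wires.get(source, []):
                sink(data)

    mgr = DataFlowManager()
    mgr.connect("editor", lambda d: print("renderer got", d))
    mgr.connect("editor", lambda d: print("logger got", d))
    mgr.emit("editor", {"shape": "circle", "radius": 10})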
-------
∂26-Mar-86 1746 EMMA@SU-CSLI.ARPA Calendar, March 27, No. 9
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 26 Mar 86 17:46:39 PST
Date: Wed 26 Mar 86 16:52:01-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Calendar, March 27, No. 9
To: friends@SU-CSLI.ARPA
Tel: 723-3561
!
C S L I C A L E N D A R O F P U B L I C E V E N T S
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
March 27, 1986 Stanford Vol. 1, No. 9
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR THIS THURSDAY, March 27, 1986
12 noon TINLunch
Ventura Hall No TINLunch this week
Conference Room
2:15 p.m. CSLI Seminar
Ventura Hall Reflexivisation: Some Connections Between
Trailer Classroom Lexical, Syntactic, and Semantic Representation
Annie Zaenen, Peter Sells, Draga Zec
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
No Colloquium this week
--------------
CSLI ACTIVITIES FOR NEXT THURSDAY, April 3, 1986
12 noon TINLunch
Ventura Hall Semantics and Property Theory
Conference Room by Gennaro Chierchia and Raymond Turner
Discussion led by Chris Menzel (chris@su-csli)
(Abstract on page 2)
2:15 p.m. CSLI Seminar
Ventura Hall Representation (part 1 of 4)
Trailer Classroom Brian Smith, Jon Barwise, John Etchemendy,
Ken Olson, John Perry (Briansmith.pa@xerox)
(Abstract on page 2)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Ventura Hall Modelling Concurrency with Partial Orders
Trailer Classroom V. R. Pratt, Stanford University
(Abstract on page 2)
--------------
!
Page 2 CSLI Calendar March 27, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
NEXT WEEK'S TINLUNCH
Semantics and Property Theory
by Gennaro Chierchia and Raymond Turner
Discussion led by Chris Menzel (chris@su-csli)
Following Frege, Chierchia and Turner argue that properties play
two metaphysical roles. In one role, they are ``unsaturated''
predicative entities, the semantic counterparts of predicate
expressions in natural language (e.g., ``is running''). In the other,
they are full-fledged ``complete'' individuals, the semantic
counterparts of singular terms (e.g., ``to run'', or ``running''). In
this paper, the authors develop a first-order theory of properties
which incorporates this insight, and which they argue is better suited
to the semantics of natural language than any currently existing
alternative. In this TINLunch, I will sketch the theory informally,
then we will discuss its philosophical foundations, and examine the
evidence the authors adduce for its superiority as a logical
foundation for semantic theory.
--------------
NEXT WEEK'S SEMINAR
Representation
Brian Smith, Jon Barwise, John Etchemendy, Ken Olson, John Perry
April 3, 10, 17, and 24
Issues of representation permeate CSLI research, often in implicit
ways. This four-part series will examine representation as a subject
matter in its own right, and will explore various representational
issues that relate to mind, computation, and semantics.
--------------
NEXT WEEK'S COLLOQUIUM
Modelling Concurrency with Partial Orders
V. R. Pratt, Stanford University
We describe a simple and uniform view of concurrent processes that
accounts for such phenomena of information systems as various kinds of
concurrency, multiparty communication, mixed analog and digital
information, continuous and discrete time and space, the dataflow
concept, and hierarchical organization of systems. The model is based
on a notion of process as a set of partial strings or partially
ordered multisets (pomsets). Such processes form an algebra whose
main operations are sums and products, Boolean operations, and process
homomorphisms. By regarding pomsets as partial strings we make a
connection with formal language theory, and by regarding them as
algebraic structures we make connections with (the models of)
first-order logic and temporal logic. These connections are helpful
for comparisons between language-based and logic-based accounts of
concurrent systems.
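As a concrete gloss on the abstract -- my own toy, not Pratt's
formalism -- the Python sketch below renders a pomset as labelled
events plus a strict order. Sequential composition orders every event
of the first operand before every event of the second; concurrent
composition adds no order at all.

    from itertools import count
    _ids = count()                          # fresh event identifiers

    class Pomset:
        def __init__(self, labels, order=()):
            self.labels = dict(labels)      # event id -> action label
            self.order = set(order)         # pairs (e1, e2): e1 precedes e2

        @classmethod
        def atom(cls, action):
            return cls({next(_ids): action})

        def seq(self, other):               # sequential composition
            order = self.order | other.order | {
                (a, b) for a in self.labels for b in other.labels}
            return Pomset({**self.labels, **other.labels}, order)

        def par(self, other):               # concurrent composition
            return Pomset({**self.labels, **other.labels},
                          self.order | other.order)

    a, b, c = Pomset.atom("a"), Pomset.atom("b"), Pomset.atom("c")
    p = a.seq(b.par(c))            # a precedes b and c; b, c concurrent
    print(sorted((p.labels[x], p.labels[y]) for x, y in p.order))
    # [('a', 'b'), ('a', 'c')]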
!
Page 3 CSLI Calendar March 27, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
SYSTEM DESCRIPTION AND DEVELOPMENT TALK
Kim Halskov Madsen (Madsen@su-csli)
Department of Computer Science, Aarhus, Denmark
Monday, March 31, 12:15, Ventura Hall conference room
This seminar is on professional language and the use of computers.
Empirical investigations of the professional language of librarians
have yielded the following observations: 1) metaphors are used
intensively; 2) concepts from the screen images have entered the
professional language; 3) concepts from the computer profession have
been assimilated by the librarians. A theory of professional language
is of importance when designing computer systems. A tentative theory
could address such issues as
. Different situations of language use
. Context dependency
. Change of language
-------
∂02-Apr-86 1752 EMMA@SU-CSLI.ARPA Calendar, April 3, No. 10
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 2 Apr 86 17:41:47 PST
Date: Wed 2 Apr 86 17:33:25-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Calendar, April 3, No. 10
To: friends@SU-CSLI.ARPA
Tel: 723-3561
!
C S L I C A L E N D A R O F P U B L I C E V E N T S
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
April 3, 1986 Stanford Vol. 1, No. 10
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR THIS THURSDAY, April 3, 1986
12 noon TINLunch
Ventura Hall Semantics and Property Theory
Conference Room by Gennaro Chierchia and Raymond Turner
Discussion led by Chris Menzel (chris@su-csli)
2:15 p.m. CSLI Seminar
Ventura Hall Representation: Categories of Correspondence
Trailer Classroom Brian Smith (Briansmith.pa@xerox)
(Abstract on page 2)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Ventura Hall Modelling Concurrency with Partial Orders
Trailer Classroom V. R. Pratt, Stanford University
--------------
CSLI ACTIVITIES FOR NEXT THURSDAY, April 10, 1986
12 noon TINLunch
Ventura Hall Interpreted Syntax
Conference Room by Susan Stucky
Discussion led by Mats Rooth (Rooth@su-csli)
(Abstract on page 3)
2:15 p.m. CSLI Seminar
Ventura Hall Representation: Foundations of Representation
Trailer Classroom Ken Olson (Olson@su-csli)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Ventura Hall Information Flow in the Design and Production of
Trailer Classroom Printers' Type: Problems of Computerizing a
Traditional Craft
Richard Southall
(Abstract on page 4)
--------------
!
Page 2 CSLI Calendar April 3, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
SEMINAR SERIES
``Mini-Series'' on Representation
Brian Smith, Jon Barwise, John Etchemendy, Ken Olson, John Perry
April 3, 10, 17, and 24
Issues of representation permeate CSLI research. During April, a
series of 4 seminars will be presented that focus on various aspects
of representation, and on its relation to computation, semantics, and
mind.
1. April 3, ``Categories of Correspondence'' -- Brian Smith
An introduction to the series, a survey of the various ways in
which representation plays a role in our research, and a sketch of a
typology of the various kinds of ``correspondence'' relation that can
hold between A and B, when A represents B (abstract below).
2. April 10, ``Foundations of Representation'' -- Ken Olson
A discussion of some of the philosophical foundations of
representation---particularly `acts' of representation---and its
relation to metaphysics and ontology.
3. April 17, ``On Stich's Case Against Belief'' -- John Perry
An analysis of the case Stephen Stich makes against belief and other
notions of folk psychology, including a critique of the conception of
representation Stich employs.
4. April 24, ``Models, Modelling, and Model Theory'' -- John
Etchemendy and Jon Barwise.
An examination of the notion of models and modelling, viewed as a
species of representation, with specific reference to their use in the
model-theoretic approach to semantics.
An abstract of the first seminar appears below.
--------------
THIS WEEK'S SEMINAR
Categories of Correspondence
Brian C. Smith (Briansmith.pa@xerox)
Photographs, sentences, balsa airplane models, images on computer
screens, Turing machine quadruples, architectural blueprints,
set-theoretic models of meaning and content, maps, parse trees in
linguistics, and so on and so forth, are all representations---
complex, structured objects that somehow stand for or correspond to
some other object or situation (or, if you prefer, are `taken by an
interpreter' to stand for or correspond to that represented
situation). It is important, in trying to make sense of
representation more generally, to identify the ways in which the
structure or composition of a representation can be used to signify or
indicate what it represents.
!
Page 3 CSLI Calendar April 3, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Strikingly, received theoretical practice has no vocabulary for
such relations. On the contrary, standard approaches generally fall
into one of two camps: those (like model-theory, abstract data types,
and category theory) that identify two objects when they are roughly
isomorphic, and those (like formal semantics) that take the
``designation'' relation---presumably a specific kind of
representation---to be strictly non-transitive. The latter view is
manifested, for example, in the strict hierarchies of meta-languages,
the notion of a ``use/mention'' confusion, etc. Unfortunately, the
first of these approaches is too coarse-grained for our purposes,
ignoring many representational details important for computation and
comprehension, while the latter is untenably rigid---far too strict to
cope with representational practice. A photographic copy of a
photograph of a sailboat, for example, can sometimes serve perfectly
well as a photo of the sailboat. Similarly, it would be pedantic to
insist, on the grounds of use/mention hygiene, that the visual
representation `12' on a computer screen must not be taken to
represent a number, but rather must be viewed as representing a data
structure that in turn represents a number. And yet there are clearly
times when the latter reading is to be preferred. In practice,
representational relations, from the simplest to the most complex, can
sometimes be composed, sometimes not. How does this all work?
Our approach starts very simply, identifying the structural
relations that obtain between two domains when objects of one are used
to correspond to objects of the other. For example, we call a
representation `iconic' when its objects, properties, and relations
correspond, respectively, to objects, properties, and relations in the
represented domain. Similarly, a representation is said to `absorb'
anything that represents itself. Thus the grammar rule `EXP ->
OP(EXP1,EXP2)', for a formal language of arithmetic, absorbs
left-to-right adjacency; model-theoretic accounts of truth typically
absorb negation; etc. A representation is said to `reify' any
property or relation that it represents with an object. Thus
first-order logic reifies the predicates in the semantic domain, since
they are represented by (instances of) objects---i.e., predicate
letters---in the representation. A representation is called `polar'
when it represents a presence by an absence, or vice versa, as for
example when the presence of a room key at the hotel desk is taken to
signify the client's absence. By developing and extending a typology
of this sort, we aim to categorize representation relations of a wide
variety, and to understand their composition, their use in inference
and computation.
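One way to make the `absorb'/`reify' distinction vivid is with a
small invented programming example (my own, not from the seminar):
the same precedence fact can be absorbed into a structure's own
ordering, or reified as an explicit object that names the relation.

    # Two ways to represent "task A precedes task B":
    absorbed = ["A", "B"]              # the list's own left-to-right
                                       # order absorbs precedence
    reified = ("precedes", "A", "B")   # precedence reified as an object

    # Only the reified form makes the relation itself available for
    # inspection -- it can be counted, negated, or quantified over:
    facts = [("precedes", "A", "B"), ("precedes", "B", "C")]
    print(sum(1 for f in facts if f[0] == "precedes"))   # prints 2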
--------------
NEXT WEEK'S TINLUNCH
Interpreted Syntax
by Susan Stucky
discussion led by Mats Rooth (Rooth@su-csli)
There are fundamentally semantic representation relations holding
between a linguist's representations and the objects and properties in
language that they represent. Furthermore, theoretical linguistics,
because of its empirical nature, requires that the representation
relation be made explicit and that certain of its representations be
grounded. Providing a mathematical specification of the formalism is
not enough: mathematical structures themselves must be interpreted.
--Susan Stucky
!
Page 4 CSLI Calendar April 3, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
NEXT WEEK'S COLLOQUIUM
Information Flow in the Design and Production of Printers' Type:
Problems of Computerizing a Traditional Craft
Richard Southall
In traditional type manufacture, it has been the task of the type
designer to conceive shapes for the characters of a typeface that have
certain combinations of stylistic and functional visual attributes,
and the task of the font producer to make objects that give rise via
the printing process to marks that yield satisfactory realizations of
the attributes conceived by the designer when a reader sees them.
Efficient communication of the type designer's wishes and intentions
to the font producer has thus been crucial to the success of type
production by traditional methods.
In present-day type manufacturing technology, the role of the font
producer is taken by a computer while that of the designer is still
played by a human. The consequent problems of communication between
the two make it worthwhile to take a harder look at the traditional
process of type design, with the aim of identifying the kind of
information that needs to be conveyed between designer and producer
and the kind of means that can be used to convey it.
(Richard Southall, typographer and typedesigner, has been a Visiting
Professor in the Computer Science Department at Stanford. He has
worked extensively with and lectured on TeX and Metafont.)
--------------
TANNER LECTURES ON HUMAN VALUES
Professor Stanley Cavell, Harvard University
sponsored by the Philosophy Department
The Uncanniness of the Ordinary
Thursday, April 3, 8 p.m., Kresge Auditorium
Scepticism, Melodrama, and the Extraordinary:
the Unknown Woman in GASLIGHT
Tuesday, April 8, 8 p.m., Kresge Auditorium
-------
∂04-Apr-86 0911 EMMA@SU-CSLI.ARPA CSLI: Late Announcement
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 4 Apr 86 09:05:17 PST
Date: Fri 4 Apr 86 09:02:11-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: CSLI: Late Announcement
To: friends@SU-CSLI.ARPA
Tel: (415) 723-3561
The following is a late announcement.
---------------
Seminar in Logic and Foundations of Mathematics
Speaker: Gordon Plotkin, Computer Science, Edinburgh University
Title: Some exercises in Frege structures
Time: Tuesday, April 8, 4:15-5:30
Place: 3d Floor, Mathematics Dept. Lounge 383N, Stanford University
S. Feferman (sf@su-csli.arpa)
-------
∂09-Apr-86 1722 EMMA@SU-CSLI.ARPA Calendar, April 10, No. 11
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 9 Apr 86 17:22:31 PST
Date: Wed 9 Apr 86 17:07:58-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Calendar, April 10, No. 11
To: friends@SU-CSLI.ARPA
Tel: (415) 723-3561
!
C S L I C A L E N D A R O F P U B L I C E V E N T S
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
April 10, 1986 Stanford Vol. 1, No. 11
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR THIS THURSDAY, April 10, 1986
12 noon TINLunch
Ventura Hall Interpreted Syntax
Conference Room by Susan Stucky
Discussion led by Mats Rooth (Rooth@su-csli)
2:15 p.m. CSLI Seminar
Ventura Hall Representation: Foundations of Representation
Trailer Classroom Ken Olson (Olson.pa@xerox)
(Abstract on page 2)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Turing Auditorium Information Flow in the Design and Production of
Printers' Type: Problems of Computerizing a
Traditional Craft
Richard Southall
--------------
CSLI ACTIVITIES FOR NEXT THURSDAY, April 17, 1986
12 noon TINLunch
Ventura Hall Understanding Computers and Cognition
Conference Room by Terry Winograd and Fernando Flores
Discussion led by Brian Smith (Briansmith.pa@xerox)
(abstract on page 2)
2:15 p.m. CSLI Seminar
Ventura Hall Representation: On Stich's Case Against Belief
Trailer Classroom John Perry (John@su-csli)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Turing Auditorium Intention, Belief and Practical Reasoning
Hector-Neri Castaneda, Indiana University
(Abstract on page 2)
--------------
ANNOUNCEMENT
Please note that the colloquia for this week and next week are both in
Turing Auditorium.
!
Page 2 CSLI Calendar April 10, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
THIS WEEK'S SEMINAR
Representation: Foundations of Representation
Ken Olson (Olson.pa@xerox)
What is it for a thing to represent another? Answers that rely in
any simple way on resemblance and causality are easily dismissed.
Peirce thought that representation was an irreducibly three-place
relation between a sign, an object, and what he called an
interpretant. But while Peirce's view has much to recommend it, the
notion of an interpretant seems to introduce an unwelcome mentalistic
element. At least it is unwelcome if we wish to account for mental
representation as one species of the more general notion instead of
giving it privileged status. I claim, however, that the notion of
interpretant does not presuppose a full-fledged mind. Other ideas of
Peirce's also deserve attention. Situation theory may finally be the
proper medium in which to realize his goal of a general theory of
signs.
--------------
NEXT WEEK'S TINLUNCH
Understanding Computers and Cognition
by Terry Winograd and Fernando Flores
Discussion led by Brian Smith (Briansmith.pa@xerox)
For some time, Terry Winograd has believed that the general semantical
and theoretical approaches embodied in current AI systems are
inadequate for dealing with human language and thought. What
distinguishes his views from those of various other AI critics is the
scope of what he takes to be the problem. In particular, as he argues
in his new book, he is convinced that nothing within what he
calls the ``rationalistic tradition''---in which he would presumably
include most CSLI research---will overcome these inherent limitations.
In this TINLunch we will discuss the argument presented in the
book, try to separate the various threads that lead to Terry's
conclusion, and assess its relevance to the CSLI research program.
(The book, which is not difficult to read, should be available at
local bookstores; some selected portions will be made available in the
usual places.)
--------------
NEXT WEEK'S COLLOQUIUM
Intention, Belief and Practical Reasoning
Hector-Neri Castaneda, Indiana University
There is a special element in the representation of intentions that
is not present in the representation of belief. This element is
fundamental and characteristic of the practical contents of thinking.
This element is essentially involved in volition and the causation of
intentional action. Any AI representation of intentional action
should include this special element.
--------------
LOGIC SEMINAR
Varieties of Algebras of Complexes
Prof. Robert Goldblatt, Victoria University of Wellington, New Zealand
Tuesday, April 15, 4:15-5:30
Math. Dept. 3d floor lounge (383 N), Stanford
-------
∂14-Apr-86 1817 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 2
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 14 Apr 86 18:15:16 PST
Date: Mon 14 Apr 86 17:22:52-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: CSLI Monthly, No. 2
To: friends@SU-CSLI.ARPA
Tel: (415) 723-3561
Will be sent out on Wednesday. It is larger than the first issue
so be warned.
Emma Pease
ps. Like the first issue it will be sent out in parts (probably 8).
-------
∂16-Apr-86 1813 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 2, part 1
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 16 Apr 86 18:00:52 PST
Date: Wed 16 Apr 86 16:19:54-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: CSLI Monthly, No. 2, part 1
To: friends@SU-CSLI.ARPA
Tel: (415) 723-3561
C S L I M O N T H L Y
------------------------------------------------------------------------
April 15, 1986 Stanford Vol. 1, No. 2
------------------------------------------------------------------------
A monthly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
---------------------
CONTENTS
Halfway Between Language and Information: The Role of Representation
at CSLI by Brian Smith --Part 1
Report from Japan by Jon Barwise --Part 2
Project Reports --Parts 3,4,5,6
Representation and Reasoning (R&R) --Part 3
Situated Automata Theory (SA) --Part 4
Discourse, Intention, and Action (DIA) --Part 4
Foundations of Grammar (FOG) --Part 5
Head-Driven Phrase Structure Grammar (HPSG) --Part 5
Computational Models of Spoken Language (CMOSL) --Part 6
New Interdisciplinary Undergraduate Major --Part 6
CSLI Postdoctoral Fellows --Part 6
CSLI Snapshots --Part 6
CSLI Visiting Scholars --Part 7
New CSLI Publications --Part 7
Letters to the Editor --Part 7
---------------------
HALFWAY BETWEEN LANGUAGE AND INFORMATION:
THE ROLE OF REPRESENTATION AT CSLI
Brian C. Smith
If you look back to the original CSLI research program, you can
find a tension between two important themes.
On the one hand, there was a *semantic* orientation -- a concern
with connection to the world. Tremendous emphasis, for example, was
(and still is) placed on the notion of information content, viewed as
arising from correlations among situations or states of affairs in a
constrained, regular world. This general insight, emphasized by
Dretske, has led people to develop theories of meaning that apply to
smoke's meaning fire, as well as to sentences meaning, say,
propositions. A focus on a semantic notion of information clearly
characterizes much CSLI work, including situation semantics, situation
theory, situated automata, and various other projects. It also
underlies some of the criticisms that have been voiced around CSLI to
the "purely formal" methodological stance towards mind, computation,
etc.
On the other hand, there has also been a strong *linguistic* theme.
A deep concern about the nature of language and linguistic structures
permeates the early proposals, and continues in a great deal of our
current work. Furthermore, there is more to language than
information, in the sense that language is a more specific phenomenon.
Linguistic structures can of course be used to carry or convey
information, but, as the smoke example illustrates, they are not
unique in this respect. Rather, languages possess a cluster of
additional properties: they are used for communication among agents;
they typically have an inductively specified compositional structure;
they seem to make explicit reference to concepts or types; they have
sophisticated mechanisms for exploiting circumstantial relativity;
etc. Some people view the communicative function as primary; others
highlight the compositional. There may be no fact of the matter to
settle such debates, since human language is almost certainly a
mixture of several intersecting properties. Nonetheless,
language-like structures clearly appear in a variety of places: in
programs and other computer languages; in formal theories generally;
and so on.
These two themes are directly reflected in CSLI's name. And the
relations between them are a subject of constant interest. The
tension alluded to above, however, has only recently begun to receive
explicit attention. It stems from two facts: (a) the ubiquity of
information, and (b) the specificity of language. For a variety of
reasons spelled out below, a notion is needed that is narrower and
more restrictive than information in general, but at the same time is
broader than language. This, at least as I see it, is the function
that representation might serve.
To get at it, note that the information theories of the sort suggested
by Dretske, Barwise and Perry, Rosenschein, and others, claim that
there is a tremendous amount of information around. In fact, if they
are right, the world is quite drenched in the stuff. The
counter-intuitive aspect of this claim has been explained by saying
that, in order to process or reason about information, an agent must
"know" or be "attuned" to the regularity it is based on. I.e.,
whereas a telephone cable can carry all sorts of information, we
wouldn't say that a cable processes the information it carries. A
person, however, or perhaps a computer or information processor, can
not only carry information, but can also process it, by being attuned
to its underlying regularities, or recognizing it as such. The
problem is that this notion of attunement hasn't yet been adequately
spelled out.
One place to search for an explanation of attunement, and of
information processing in general, is to look back to language. But,
especially if you take the communicative nature of language seriously,
that just seems wrong: language is a very particular phenomenon, that
has presumably developed to play a communicative function among agents
of a certain sort. Rather, what seems to be needed is a more general
notion, that would include language as a specific variety, but that
would encompass within its scope a much wider range of possibilities.
Representation seems the ideal candidate. For one thing,
non-linguistic representations are familiar: mathematical quadruples
to represent Turing machines; numbers and other mathematical
structures to represent distances, physical magnitudes (literally,
ratios of magnitudes), and scientific phenomena in general; sets of
sets to represent numbers; etc. And then there are photographs,
blueprints, musical scores, gestures, maps, balsa models, equations,
recipes, icons, ... the list goes on and on. Even Dreyfus admits that
minds, too, are representational, in a general sense, since we all
clearly represent the world around us in a thousand ways. And yet at
the same time it seems right to say that language, at least in some of
its functions, is a species or kind of representation.
It is not my intent here to try to say what representation is; that
is the job of (among others) the Representation and Reasoning project
(see report in this issue). Rather, my goal is only to place it on
the table as a subject deserving its own intellectual inquiry.
Furthermore, it is a subject that affects us all, as even a quick
survey demonstrates:
1. Linguistics
Linguists, in accounting for the regularities of language, use all
sorts of representational structures: parse trees, feature analyses,
information-structures such as LFG's C- and F-structures, phonetic
representations such as those discussed in the CMOSL report in this
issue, grammars and grammatical formalisms, etc. Although these
structures aren't full-fledged languages, it is being increasingly
realized that they are definitely representational, and as such
deserve their own semantical analysis. See for example work by
Shieber and Pereira on semantics for grammatical formalisms, the FOG
and HPSG reports in this issue, and Stucky's recent paper on
"Interpreted Syntax".
2. Logic
It is distinctive of the model-theoretic approach to semantics to use
mathematical structures to model the interpretation or content of
sentential and other linguistic forms. Etchemendy and Barwise, in a
seminar to be presented later this month, will analyze this tradition
from a representational point of view. Although it would be odd to
call a model linguistic, it seems quite appropriate to take "model" to
be a species of representation, which enables one to ask about the
further semantical relation between the model and the world being
modelled.
3. Artificial Intelligence
Although knowledge representation is recognized in AI as a central
issue, most explicit theorizing has been about the nature and
structure of knowledge, not representation. Nonetheless, the
representational aspect of the project seems equally important,
especially for computational purposes. A theory of representation
might allow otherwise unanalyzed proposals to be assessed and
compared. For example, much of the difference between logical and
frame or network systems has to do with important differences in
representation that aren't captured in model-theoretic content.
Similarly, we should be able to reconstruct persistent intuitions
about analog, propositional, and imagistic representations.
4. Computer Science
Computational practice is full of representational structures: data
structures, data bases, programs, specifications, declarations, etc.
Several years ago some of us argued for a linguistic analysis of such
constructs, but again -- especially with the hindsight obtained by
working with real linguists -- this seems too specific. Furthermore,
there are competing traditions within computer science; the abstract
data type tradition, for example, argues explicitly against a
linguistic analysis of computational processes, instead classifying
their structure in terms of mathematical models. But however it is
analyzed, there is no doubt that our current systems embody a wealth
of representational relations. Consider a text editor like EMACS or
TEdit, for example: there is the text being edited, the presentation
on the screen, the internal "text" stream or document representation,
the internal program that describes the editor, the "language" of
interaction made of keystrokes and mouse-buttons, etc. This domain
provides a rich source of examples and tests for any proposed theories
of representation.
5. Philosophy
It has been a pervasive intuition in the philosophy of mind that
mental operations must in some way be representational. Even on a
relatively narrow conception of that notion, such writers as Block,
Dennett, Fodor, Newell, Pylyshyn, Rey, and Stich will sign up. As the
notion broadens, other theorists will start to agree -- Perry and
Barwise, for example, even Dreyfus. Rather than taking the
"representational" thesis as a binary yes/no question, a more
sophisticated theory of representation might allow finer distinctions
among these writers to be articulated and explained.
In closing, it is worth pointing out a distinctive aspect of CSLI's
particular approach to representation. Several years ago, we were
very careful to insist that "information" was a semantic, not a
formal, notion -- as opposed to the way it was treated, for example,
in Shannon's theory of information capacity. I.e., rather than
rejecting information, the proposal was to reconstrue it in a way that
got more directly at its essential properties. My suggestion is that
we apply the same medicine to received views of representation. For
example, many readers of Barwise and Perry's "Situations and
Attitudes" found the authors to take an anti-representational view of
mind. In retrospect, it seems as if this bias arose from a failure to
discriminate between *formal* theories of representation, and
representation as a more general, semantic, phenomenon. What we need
is a theory of representation that can do justice to this fuller
notion.
Various projects are already working towards this goal. For
example, the Situated Automata project (see report in this issue) can
be viewed as an exploration of (a) how much information processing can
be embodied in a non-representational agent, and (b) what kinds of
representation, other than of the formal, linguistic variety, are
useful in developing agents able to successfully cope with their
embedding circumstances. Similarly, in my own work on the foundations
of computation, I am attempting to erect a theory of computation on a
representational but non-formal base. By freeing representation from
its purely formal heritage, there is at least a chance that we will
uncover a notion that can resolve the initial tension, and
successfully occupy a middle ground between language and information.
---------------------
-------
∂16-Apr-86 1911 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 2, part 2
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 16 Apr 86 19:07:06 PST
Date: Wed 16 Apr 86 16:20:41-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: CSLI Monthly, No. 2, part 2
To: friends@SU-CSLI.ARPA
Tel: (415) 723-3561
REPORT FROM JAPAN
Jon Barwise
I have just returned from a fascinating two weeks in Japan. One
week was spent at an International Symposium on Language and
Artificial Intelligence. The other was spent partly in sightseeing,
in giving lectures in Tokyo, and in visiting the Natural Language
Processing Group at ICOT, hosted by Dr. Kuniaki Mukai, leader of that
group.
The symposium was the first event completely sponsored by the new
International Institute for Advanced Studies, which is to be part of
the new Science City in Kyoto Prefecture, near Kyoto. The meeting was
planned by a committee chaired by Prof. M. Nagao, from Kyoto
University. It consisted of four busy days of closed meetings,
followed by an afternoon public session. There were 20 invited
speakers, 10 from Japan, and 10 from other countries, including
Barbara Grosz and myself from CSLI. (Since this is a report about
work in Japan, I will limit my discussion below to a few of the talks
given by the Japanese speakers.) In addition, there were 10 more
invited participants and 20 observers, all from Japan. My rough count
of the public session put the number at around 1200, with people
traveling hours from all over Japan to attend.
Both the meeting and the astounding attendance at the public
session show the keen interest in Japan in the area of natural
language and AI. My sense is that this interest stems from three
sources. One is just the natural fascination of language to the
Japanese people, for reasons anchored in the history and structure of
their own language. A second is the problems faced by the Japanese in
terms of communication with the rest of the world. While they are one
of the world powers, economically, they speak a language that no one
else in the world uses. Thus basically everything written needs to be
translated either into Japanese or from Japanese into a host of other
languages. Finally, there is the Japanese determination to stay at
the forefront of research and productivity in computer science.
A number of things struck me in the invited addresses given by the
Japanese participants, as well as in discussions in Kyoto and Tokyo.
One was that they are much more aware of the overriding importance of
circumstantial facts of discourse and context in the interpretation of
utterances than the typical researcher in the US (outside the Bay
Area, of course). For example, the overriding importance of context
was emphasized in a very interesting paper on future prospects in
machine translation, by Prof. J. Tsujii, of the Electrical
Engineering Department of Kyoto University. Partly this concern comes
from the heavy use of "ellipsis" in Japanese, partly from the role
played by honorifics and by tags that indicate the speaker's attitude
toward his own "assertion". Similarly, several talks stressed the
role of emotional factors in interpretation in Japanese. (Prof. Y.
Anzai gave a talk called "Towards emotional architecture for natural
language processing", but he observed that his title was partly
intended to be a pun.) I suspect that the importance of context in
Japanese is one of the things that attracts many Japanese students of
language to situation semantics, since it is a theory where context
plays a much bigger and more flexible role than traditional semantic
accounts.
Another thing that struck me was the extent to which Japanese
researchers are on top of theoretical developments from CSLI: LFG,
GPSG, PATR, and situation semantics, in particular. For example,
Prof. T. Gunji, from Osaka University, also head of an ICOT working
group, gave a very interesting paper taking off from Carl Pollard's
Head Grammar, generalizing it to allow "Subcat" to take a partially
ordered set of features in a way that gave a very elegant treatment of
a number of puzzles in Japanese syntax and semantics. This work is
part of the JPSG grammar of Japanese being developed at ICOT.
Similarly, the group at ICOT has worked through the latest things on
situation semantics, even things that are not yet finished. Barbara
Grosz and I were also struck with how important various problems that
we have been wrestling with here at CSLI are in a wide range of
applications, especially issues in representation -- mental and
otherwise. Unfortunately, this is one area where progress made here
has not been written up in any one place, so the ideas have not
reached the outside yet.
The public session was hosted by the chairman of the planning
committee, Prof. H. Yoshizawa. He stressed the importance of
theoretical, basic research, and that, while the new Institute is
being supported by business and industry, it is to be devoted to such
basic research. It is to have no fixed staff or subject matter, but
will support various theoretic projects, for different periods of
time. Language and AI is one of the areas they currently plan to
support. If the planning and execution of the symposium are any
indication, they will do a splendid job.
In Tokyo, I lectured to the Logic and Linguistics Society of Japan,
a thriving group with several hundred members throughout Japan. About
100 or so came to my talk. I noted there that there is no similar
organization in this country, or the world, as far as I know. It
seems to be a very happy collaboration. However, there are apparently
very few philosophers interested in natural language in Japan. While
some people in this country might wish that there were fewer
philosophers in the field, in Japan this shortage is seen as a
problem.
My talk to this group was on recent developments in situation
semantics. I borrowed freely from Gunji's paper and from LFG notation
to give the semantics of a small fragment of Japanese, one where I
tried to indicate the interaction of the discourse, grammatical,
background, and described situation. Before going to Japan, I had
been warned that Japanese audiences consider it impolite to ask
questions. Contrary to what I had been led to expect, however, there
was a lot of discussion, both at this talk and at the symposium, and
the discussions were
very productive.
My final full day in Japan was spent at ICOT. There I heard reports
from the members of the Natural Language Processing Group on JPSG,
situation semantics, and the aims of their particular project. I was
struck by how theoretical it was, contrary to a widespread view of
what is going on at ICOT. They see their project as basically a
bridge between theory and the academic world, on the one hand, and
implementation and the industrial world, on the other, a bridge that
allows a two-way-flow of ideas. In this regard, it is much more
similar to CSLI (though much smaller) than I had expected. I also
discovered that some of the ideas currently being developed in the
STASS group (for example, Susan Stucky's view of interpreted syntax)
are implicit in some of Dr. Mukai's work.
I had lunch with members of this group and Dr. K. Fuchi, the
director of ICOT. I found his view of ICOT much the same as that
depicted by the NLP group earlier. I also found him very interested
in cooperation with scientists from CSLI. I don't know if there has
been a shift in perspective at ICOT, or if the hysteria in the US over
the 5th generation work has given us a warped perspective, but again I
sensed much more in common between their perspective and ours here at
CSLI than I had foreseen. Both at ICOT and at the symposium I found
the researchers keenly aware of the deep theoretical difficulties that
lie ahead, and so much more interested in long-term basic research
than I had been led to expect.
In the afternoon, I saw a demo of a natural language discourse
system, DUALS, on the new PSI machine, with a frank appraisal of the
strengths and weaknesses of the DUALS system. This was followed by a
two-and-a-half-hour discussion session with about 30 people from
ICOT, and from industrial and academic institutions around Tokyo, who
had prepared detailed questions about situation semantics. Again, I
found them very well informed and far from reticent about asking tough
questions. It was a very thought-provoking afternoon.
In fact, it was a very thought-provoking trip in many ways. One
thing that people told me repeatedly was how much they envied people
at CSLI for being able to interact across disciplines and institutions
so easily. Many of them would love to spend some time here. Of
course, the interactions here may not be as easy for us as they
imagine, but it certainly is much easier than for them. They have all
our problems, and more. For example, to get together in Tokyo, people
usually commute for an hour or two in each direction, in addition to a
similar commute to and from home. The same problem was mentioned in
Kyoto. Also, while we think we are short of space, it is nothing
compared to the space situation there. All in all, I came away with a
real admiration for the work the Japanese are doing, but also with a
fuller appreciation of the CSLI environment.
Finally, I would add that my hosts in Japan could not have been
more thoughtful. Not only was the trip very productive for me,
scientifically, it was also thoroughly enjoyable. I returned with a
richer sense of the international character and importance of the
research we are engaged in here at CSLI.
---------------------
-------
∂16-Apr-86 2047 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 2, part 3
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 16 Apr 86 20:28:32 PST
Date: Wed 16 Apr 86 16:26:45-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: CSLI Monthly, No. 2, part 3
To: friends@SU-CSLI.ARPA
Tel: (415) 723-3561
PROJECT REPORTS
REPRESENTATION AND REASONING (R&R)
Brian C. Smith
Project Participants: Ivan Blair, Carol Cleland, John Etchemendy,
David Levy, Ken Olson, Brian Smith (Project
Leader), Lucy Suchman, Terry Winograd
So far, the Representation and Reasoning project has only been
responsible to the first half of its name. Our present aim is to
develop a comprehensive theory of representation and modeling, able to
explain the pervasive role these notions play in language,
computation, mind, and general information processing. But even this
is far too large a goal to tackle directly, so within that general
framework we've identified four slightly more manageable projects:
1. Developing a typology of the various kinds of "correspondence"
relations that can hold between A and B, when A represents B
2. Analyzing the philosophical foundations of representation --
particularly of *acts* of representation ("represent",
after all, is a verb)
3. Examining the notion of models and modeling (viewed as a
species of representation), with specific reference to their
use in the model-theoretic approach to semantics
4. Constructing a new theory of computation and information
processing, based on a foundation of embodied representational
processes
This report will focus on only the first two of these, since they have
received the bulk of our attention.
1. Categories of Correspondence
Consider photographs, sentences, balsa airplane models, computer screens,
Turing machine quadruples, architectural blueprints, set-theoretic models
of meaning and content, maps, parse trees in linguistics, and so on and
so forth. Each is a representation -- a complex, structured object --
that somehow stands for, or corresponds to, some other object or
situation (or, if you prefer, is *taken by an interpreter* to stand for,
or correspond to, that represented situation -- see below). Our first
task, in trying to make sense of this wide variety of representation
relations, has been to identify the ways in which the structure or
composition of a representation can be used to signify or indicate what
it represents.
It is striking that received theoretical practice has no vocabulary
for such relations. On the contrary, standard approaches generally
fall into one of two camps: those (like model-theory, abstract data
types, and category theory) that identify two objects when they are
roughly isomorphic, and those (like formal semantics) that take the
"designation" relation -- presumably a specific kind of representation
-- to be strictly non-transitive. The latter view is manifested, for
example, in the strict hierarchies of meta-languages, the notion of a
"use/mention" confusion, etc. Unfortunately, the first of these
approaches is too coarse-grained for our purposes, ignoring many
representational details important for computation and comprehension,
while the latter is untenably rigid -- far too strict to cope with
representational practice. A photographic copy of a photograph of a
sailboat, for example, can sometimes serve perfectly well as a photo
of the sailboat. Similarly, it would be pedantic to insist, on the
grounds of use/mention hygiene, that the visual representation `12' on
a computer screen must not be taken to represent a number, but rather
must be viewed as representing a data structure that in turn
represents a number. And yet there are clearly times when the latter
reading is to be preferred. In practice, representational relations,
from the simplest to the most complex, can sometimes be composed,
sometimes not. How does this all work?
Our approach has been to start very simply, and to identify the
structural relations that obtain between two domains, when objects of
one are used to correspond to objects of the other. For example, we
call a representation "iconic" when its objects, properties, and
relations correspond, respectively, to objects, properties, and
relations in the represented domain. Similarly, a representation is
said to "absorb" anything that represents itself. Thus the grammar
rule, EXP -> OP(EXP1,EXP2), for a formal language of arithmetic,
absorbs left-to-right adjacency; model-theoretic accounts of truth
typically absorb negation; etc. A representation is said to "reify"
any property or relation that it represents with an object. Thus
first-order logic reifies the predicates in the semantic domain, since
they are represented by (instances of) objects -- i.e., predicate
letters -- in the representation. A representation is called "polar"
when it represents a presence by an absence, or vice versa, as for
example when the presence of a room key at the hotel desk is taken to
signify the client's absence. By developing and extending a typology
of this sort, we intend to categorize representation relations of a
wide variety, and to understand their use in inference, their
composition, etc.
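By way of illustration, here is a short sketch in Python (an
editor's invention for this newsletter, not the project's formalism)
that records the correspondence categories just introduced, with
examples drawn from the text attached:

  # Sketch of the correspondence typology; the categories and examples
  # come from the discussion above, the encoding is invented.
  from dataclasses import dataclass
  from enum import Enum, auto

  class Correspondence(Enum):
      ICONIC = auto()    # objects/properties/relations map to like kinds
      ABSORBED = auto()  # something is represented by itself
      REIFIED = auto()   # a property or relation is represented by an object
      POLAR = auto()     # a presence is represented by an absence, or vice versa

  @dataclass
  class Example:
      representation: str
      represented: str
      kind: Correspondence

  EXAMPLES = [
      Example("rule string EXP -> OP(EXP1,EXP2)", "left-to-right adjacency",
              Correspondence.ABSORBED),
      Example("predicate letters in first-order logic", "predicates",
              Correspondence.REIFIED),
      Example("room key at the hotel desk", "the client's absence",
              Correspondence.POLAR),
  ]

  for e in EXAMPLES:
      print(f"{e.representation} stands for {e.represented} ({e.kind.name})")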
Even this much discussion suggests how wide a variety of examples
are relevant to this analysis of correspondence, but we have found the
domain of visual representations on computers to be a particularly
rich source of both insights and constraints. In simple cases of
computerized editing, for example, a user must understand the
relations among a whole host of representational structures: visual
figures on the screen, document representations in the computer
memory, printed presentations of documents, documents themselves
(whatever they are), "languages" of interaction (mouse clicks,
keyboard commands, etc.), visual annotations representing formatting
parameters of the document (TeX commands, style sheets, etc.). In
developing our theory of correspondence, therefore, we are working
closely with the Analysis of Graphical Representation project. Our
connections with other CSLI groups are also strong, particularly with
the STASS, Embedded Computation, and Situated Automata groups, each of
which is wrestling with the role of representation in computation,
information processing, and inference. In part we view these other
groups as potential "customers" for any theories we develop.
2. The Foundations of Representation
There is clearly more to representation than correspondence. For one
thing, representation, at least in general, is apparently asymmetric,
whereas correspondence -- especially when viewed in the very general
way suggested above -- would seem to be a symmetric relation.
Secondly, representation seems to require some sort of causal,
intentional, or at least counterfactual supporting connection, whereas
two structures might end up in a correspondence relation for purely
accidental reasons. Finally, there is surely *too much*
correspondence in the world (such as, famously, between the suicide
rate in France and the price of bananas in the 1920s). While
representation may often involve correspondence, it also involves a
great deal more, and seems therefore to be a much rarer commodity.
In assessing the foundations of representation, we have been drawn
into a variety of metaphysical and methodological concerns, and have
been motivated to look at writers ranging from Goodman to Peirce to
Brouwer to Bohr, as well as those within the standard "linguistic"
traditions. While it is too early to report on any of these
intellectual forays, the group does seem to agree on at least
something like the following program:
To start with, we have come to use the term "registration" for the
process whereby an agent "parses" the world, thereby carving it into
objects, properties, substances, relations, whatever. The group is by
no means agreed on such metaphysical issues as realism, anti-realism,
etc. (i.e., on whether the world comes registered in advance, whether
the constraints on registration are solely the individual's or
community's concern, or whether the process is one of negotiation
between the agent, community, and embedding situation). Nonetheless,
there does seem to be some agreement, at least in terms of conceptual
foundations, that the following three subjects must be studied
together: acts of representation, acts of interpretation (i.e., where
an intentional agent takes A to represent B), and acts of
registration. Furthermore, it is also clear, in many paradigmatic
cases of representation, that the connection between representation
and represented, whatever it is, need not be one of nomic coupling.
On the contrary, most if not all representational relations are
characterized by a certain degree of "disconnection", so that how the
world is, and how the representation represents it as being, need not
be the same.
In investigating these issues we have discussed such examples as:
1. Spinach reacting to salt water (which seems to involve no
representation, registration, or interpretation)
2. The Count of Monte Cristo's recognizing that he has been thrown
into the sea (which seems at least to involve registration;
there is room for debate on the other two)
3. Helen Keller's saying "Water!", when her hands were held under
the faucet, and the word was repeatedly pronounced for her
4. A computer system's internal structures or states that
correspond to the presence of water, as for example in a
computerized fluoridation plant
5. The occurrence on a map of an icon or symbol representing water
Needless to say, we have not yet produced anything like a coherent
story that can deal with all these issues. We do believe, however,
that they get at questions relevant throughout CSLI research. Two
clear examples are (a) the notion of "attunement" to, or "recognition"
of, a constraint, as used by Barwise and Perry to explain how
information can be processed; and (b) the representational foundations
of computation, as being explored in the Embedded Computation project.
-------
∂16-Apr-86 2142 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 2, part 4
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 16 Apr 86 21:31:45 PST
Date: Wed 16 Apr 86 16:27:55-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: CSLI Monthly, No. 2, part 4
To: friends@SU-CSLI.ARPA
Tel: (415) 723-3561
SITUATED AUTOMATA THEORY (SA)
Stan Rosenschein
Project Participants: Todd Davies, Doug Edwards, Haim Gaifman, Leslie
Kaelbling, Fernando Pereira, Stan Rosenschein
(Project Leader)
The main goal of the Situated Automata project is to investigate
informational properties of computational systems embedded in larger
physical or computational environments. Our long-term activities fall
into three areas: (1) developing a mathematical theory of how the
structure of a machine and its coupling to the environment determine
its informational characteristics, (2) developing formal design
methods for building systems with complex informational properties,
and (3) applying the design methodology to robots, natural language
systems, and other computational systems that perceive conditions in
their environment and act intelligently on it.
In the situated-automata approach, information content is analyzed
in terms of correlations of state between a system and its environment
over time. A machine (or machine component) x is operationally
defined to carry the propositional information that p in a given state
v if its being in state v is systematically correlated with p's being
true in the world. This definition provides a concrete model for
well-known epistemic logics and can be directly applied to actual
computer systems. Much of the work of the project has to do with how
different conceptions of correlation, machine, and proposition give
rise to different perspectives on the analysis and synthesis problems.
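As a concrete gloss on this operational definition, consider the
following sketch in Python (an editor's illustration; the trace and
names are invented), which tests whether a component's state v is
systematically correlated with p across a log of joint observations
of machine and world:

  # Minimal correlational test: x carries the information that p in state v
  # iff every logged occurrence of v coincides with p being true.
  def carries(trace, v):
      """trace: list of (state_of_x, p_is_true) observations."""
      occurrences = [p for (state, p) in trace if state == v]
      return bool(occurrences) and all(occurrences)

  trace = [("wet", True), ("wet", True), ("dry", False), ("dry", True)]
  print(carries(trace, "wet"))  # True: "wet" is correlated with p
  print(carries(trace, "dry"))  # False: "dry" occurs with p true and false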
During the past few months we have been concentrating our efforts
on two major areas: machines with composite structure and
informational aspects of perception. Each of these will be described
in turn.
1. Extending situated automata theory to machines with composite
structure
In practice, complex machines are constructed of many connected
components which can be modeled at several levels of abstraction. One
important research problem is to understand the localization of
information within components of such a machine. Stan Rosenschein is
attempting to analyze machine inference in terms of information flow
among components of a complex machine, using the correlational model
of information as the basis for the analysis.
As a practical outgrowth of some of this work, Leslie Kaelbling and
Stan Rosenschein have been designing a language, Rex, in which complex
machines can be specified in a way that allows compositional reasoning
about the propositional content of the machine states without assuming
a conventional "language of thought" approach, i.e., the approach
which views an agent's mental state as consisting of representations
having the structure of interpreted linguistic expressions.
Experiments are currently under way that use this language to control
Flakey, SRI International's robot. In addition, Leslie has been
exploring how issues of cognitive architecture (e.g., the
modularization of perception and action, hierarchical control, and
planning) are most naturally formulated in the situated-automata
framework.
At a more abstract level, Fernando Pereira, in joint work with Luis
Monteiro, has been using the theory of sheaves to model located
concurrent processes and their information content using the
conceptual tools of abstract algebra.
2. Developing a rigorous informational theory of machine perception
Perception is a challenging test case for situated automata theory.
In trying to analyze information content at the lowest level of
perception, we were reluctantly led to the conclusion that low-level
perceptual information is largely statistical in character. This
caused us to explore a probabilistic version of the theory, which we
probably would not have done in the absence of the robot-perception
application. While the fundamental ideas of situated automata theory
are easily carried over to the probabilistic domain, this move
introduces considerable complexity into the design of practical
systems.
The difficulty stems from the difference in how "correlation" is
operationalized in the probabilistic vs. non-probabilistic versions of
the theory. In the non-probabilistic case, correlation is modeled
using implication: element x carries the information that p (written
K(x,p)) if x's being in its current state implies that p is currently
true. Because of the nature of implication, this version of the
theory satisfies a strong "spatial" monotonicity property: K(x,p) &
K(y,q) --> K([x,y],p&q). This allows us to describe the information
carried by structured objects [x,y] directly in terms of the
information carried by their components x and y, leading naturally to
hierarchically structured designs that can be reasoned about
compositionally.
Unfortunately, in the probabilistic case, where correlation of
conditions is most naturally operationalized in terms of conditional
probabilities, spatial monotonicity fails. The conditional probability
of p&q given the joint state of x and y bears only a weak relationship
to the probabilities of p and q given the states of x and y
individually. This rules out naive approaches to hierarchical design,
and we are therefore exploring constrained design disciplines that
will allow us to reason hierarchically about information content while
retaining an underlying probabilistic definition of information.
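The failure of spatial monotonicity in the probabilistic setting can
be seen in a small invented example (again an editor's sketch, not
the project's analysis): two worlds that agree on the component-wise
conditional probabilities but disagree on the joint one.

  # Two worlds over outcomes (p, q), with x and y each in a single fixed
  # state, so the conditionals reduce to marginals. Both worlds give
  # P(p) = P(q) = 0.5, yet P(p & q) differs, so the component-wise figures
  # cannot determine the information carried by the joint state of [x,y].
  world_A = {(True, True): 0.5, (False, False): 0.5}   # p and q locked together
  world_B = {(True, True): 0.25, (True, False): 0.25,  # p and q independent
             (False, True): 0.25, (False, False): 0.25}

  def prob(dist, pred):
      return sum(w for outcome, w in dist.items() if pred(outcome))

  for name, w in (("A", world_A), ("B", world_B)):
      print(name, prob(w, lambda o: o[0]), prob(w, lambda o: o[1]),
            prob(w, lambda o: o[0] and o[1]))
  # A: 0.5 0.5 0.5    B: 0.5 0.5 0.25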
In related work, we are developing a logic of perceptual information
to serve as the metalanguage for specifying the information content of
states of the perception module. In this language the designer can
make assertions about probabilities of certain physical conditions
holding of the environment given internal states of the robot and vice
versa. The goal is to be able to specify physical conditions in a
sublanguage expressive enough to describe the world of everyday
experience and precise enough to allow rigorous reasoning about their
relation to machine states.
The perception work has begun only very recently and is being carried
out by a working group consisting of all the project members, with
Haim Gaifman playing a particularly active role on the logic of
perceptual information.
DISCOURSE, INTENTION, AND ACTION (DIA)
Phil Cohen, Doug Appelt, and Amichai Kronfeld
Project Participants: Doug Appelt, Herb Clark, Phil Cohen (Project
Leader), Barbara Grosz, Jerry Hobbs, Amichai
Kronfeld, Ray Perrault, John Perry, Martha
Pollack, Heather Stark, Susan Stucky, Deanna
Wilkes-Gibbs, Dietmar Zaefferer
This quarter, the Discourse, Intention, and Action group
concentrated on the relationship of theories of rational interaction
to theories of illocutionary acts, and to theories of referring. They
discussed in detail proposals by Phil Cohen (in collaboration with
Hector Levesque from the Department of Computer Science, University of
Toronto), Doug Appelt, and Amichai Kronfeld.
Cohen and Levesque's work shows how many illocutionary acts can be
defined in terms of rational interaction. They argue that
illocutionary acts are "attempts", actions done with certain beliefs
and goals/intentions. The speaker need not achieve the intended
effects directly, but may achieve them mediated by a chain of
entailment, the elements of which are justified by the theory of
rational interaction.
Cohen and Levesque's theory of illocutionary acts has three
components:
1. A theory of rational interaction that shows how agents' beliefs,
goals, intentions, and actions are related, both within and
across agents
2. A (simplistic) theory of sincerity (in which sincere agents
do not try to bring about false beliefs in other agents)
3. A characterization of the effects of uttering sentences with
certain "features" (a la Grice), such as a given syntactic mood
With these three sub-theories, they show how Searle's felicity
conditions (preparatory, sincerity, essential/illocutionary point,
etc.) can be derived from the initial characterization of uttering
sentences in a given syntactic mood. Moreover, the expected success
of performative uses of various illocutionary verbs can be derived.
Here, they basically follow a Bach/Harnish analysis of performatives
as indicative mood utterances, but treat such utterances as stating
that the very utterance event is one characterized by the mentioned
illocutionary verb. Hence, since the illocutionary verb names an
"attempt", the speaker only had to have the right beliefs and goals.
To the extent that Cohen and Levesque's analysis is on the mark,
the subject of illocutionary acts is in some sense less interesting
than it has been made out to be. That is, the interest should be in
the nature of rational interaction and in the kinds of reasoning
(especially nonmonotonic) that agents use to plan and to recognize the
plans of others. Many illocutionary acts are derived from such a
pattern of reasoning, and constraints on their use in conversation
follow from the underlying principles of rationality, not from a list
of sequencing constraints (e.g., adjacency pairs).
At the level of rational interaction, Cohen and Levesque argue that
the concept of intention is composite (or molecular) -- agents are
both directed (at something) and persistent. Persistent goals are
ones the agents will keep (and in most cases, try to achieve), even
after numerous failures. Agents can only give up their persistent
goals under certain circumstances. Minimally, such goals can be given
up only if they are achieved or believed to be impossible. The notion
of persistence is particularly useful in that it shows why agents need
not intend all of the expected consequences of their intentions.
Simply, they are not persistent with respect to expected side effects.
A useful extension of the concept of persistent goal is the
expansion of the conditions under which an agent can give up his/her
goal. When necessary conditions for an agent's dropping a goal
include his/her having other persistent goals (call them
"supergoals"), the agent can generate a chain of goals such that if
the supergoals are given up, so may the subgoals. If the conditions
necessary for an agent's giving up a persistent goal include his/her
believing some other agent has a persistent goal, a chain of
interlinked goals is created. For example, if Mary requests Sam to do
something and Sam agrees, Sam's goal should be persistent unless he
finds out Mary no longer wants him to do the requested action (or, in
the usual way, he has done the action or finds it to be impossible).
Both requests and promises are analyzed in terms of such
"interpersonally relativized" persistent goals.
It was pointed out by Perrault and Clark that the theory assigns
an implausible effect to the uttering of ironic utterances. For
example, in analyzing the imperative "Jump in the lake", the theory
(initially) proposes that after uttering it, the hearer thinks it is
mutually believed that the speaker wants it to appear that (formalized
as "the speaker wants the hearer to think/believe that") he wants the
hearer to jump in the lake. At this point, one could reason that the
speaker wants the hearer to believe something that both parties
mutually know the speaker does not want the hearer to know.
Derivation of a true request would be blocked here, as desired.
However, the counterargument is that even that weak effect should not
hold for ironic imperatives. The problem, it is argued, is
symptomatic of the need for nonmonotonic inference. Cohen and
Levesque agreed, and lacking a theory of nonmonotonic inference for
modal logics, they may substitute inferences employing negated modal
operators (where various conditions are stated in the form
~Mutual-Belief ~p). Whether this will be adequate remains to be
investigated. In future meetings, Perrault will present the beginning
of a nonmonotonic theory of speech acts.
Cohen and Levesque are (still) writing a paper presenting a formal
theory of these concepts entitled "Communication as Rational
Interaction". It should be available soon.
Doug Appelt and Amichai Kronfeld have been developing a theory of
referring as rational action. They have developed a theory of beliefs
about material objects in which it is possible to represent the
aspects of an agent's beliefs that are relevant to referring.
According to this theory, agents acquire representations of physical
objects through actions of perception and communication, and they
describe how beliefs about what these representational objects denote
change over time. When a speaker utters a referring expression, s/he
intends the hearer to invoke some set of these representational
objects, all of which denote the same thing. This action is currently
called "concept activation". As part of his/her communicative
intentions, the speaker places conditions on what kinds of
representational objects this active set should contain.
One important application of this model is the ability to represent
"identification conditions". Appelt and Kronfeld view the intention
that the hearer identify the referent as placing certain constraints
on the hearer's activated concept. If the hearer's active concept
satisfies the speaker's identification conditions, s/he is said to
have "identified" the referent of the speaker's description. Clearly,
different identification conditions are relevant in different
contexts. One identification condition might be that the active
concept contain a perceptual representation of the object referred to.
This is about as close to absolute identification as one can come
within this theory. A much weaker identification condition is that
the active concept contain a representational object resulting from a
previous communicative act, which amounts to the simplest case of
coreference resolution. For example, a perceptual identification is
necessary to carry out a request that involves physical manipulation
of the object referred to. Thus, "Replace the 200 ohm resistor"
requires perceptual identification, but "Tell me the voltage drop
across the 200 ohm resistor" requires identification only if the
voltage is to be measured by connecting a voltmeter to the circuit.
If the hearer is providing his/her answer from his/her general
knowledge about the circuit that is being repaired, s/he could answer
without perceiving the referent at all. Several other possibilities
are under study. What is important is that the identification
conditions follow from the recognition of the speaker's intentions
about what the hearer is to do or to believe. It is not necessary to
hypothesize any explicit act of identification as part of the meaning
of a referring expression. We have been able to construct an example
that illustrates how a plan for perceptual identification is
formulated by a hearer who understands a speaker's request to
manipulate a physical object.
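To make the notion of an identification condition concrete, here is a
deliberately simple sketch (an editor's illustration with invented
names, not Appelt and Kronfeld's machinery): the hearer's activated
concept is a set of representational objects tagged with their
provenance, and a perceptual identification condition is a predicate
over that set.

  # An activated concept: representational objects that denote the same
  # thing, each tagged with how it was acquired.
  def perceptually_identified(active_concept):
      return any(obj["source"] == "perception" for obj in active_concept)

  concept = [{"denotes": "the 200 ohm resistor", "source": "prior discourse"},
             {"denotes": "the 200 ohm resistor", "source": "perception"}]
  print(perceptually_identified(concept))  # True: suffices for a request
                                           # to manipulate the object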
Future sessions of DIA will include discussions of the relationship
of nonmonotonic reasoning to illocutionary acts and plan-recognition,
and the relationship of intentions and illocutionary acts to
discourse.
-------
∂16-Apr-86 2251 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 2, part 5
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 16 Apr 86 22:47:26 PST
Date: Wed 16 Apr 86 16:29:25-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: CSLI Monthly, No. 2, part 5
To: friends@SU-CSLI.ARPA
Tel: (415) 723-3561
FOUNDATIONS OF GRAMMAR (FOG)
Lauri Karttunen and Stuart Shieber
Project Participants: Roland Hausser, Mark Johnson, Ron Kaplan, Lauri
Karttunen (Project Leader), Martin Kay,
Fernando Pereira, Carl Pollard, Ivan Sag,
Stuart Shieber, Hans Uszkoreit, Tom Wasow,
Dietmar Zaefferer
General Issues
The Foundations of Grammar project has been concerned overall with
elucidating the various foundational bases of grammar. These include
the *mathematical*, *computational*, and *empirical* foundations of
grammar. We have been particularly concerned with those grammar
formalisms in prevalent use at CSLI, which might go under the term
"Unification Grammars" (UGs). In grammars of this type, syntactic
rules and lexical entries can be expressed as sets of attribute-value
pairs. The value of an attribute can itself be a set of attributes
and values and, because the value at the end of one path of attributes
can be shared by another path, the structures that these grammars
generate can be thought of as directed graphs. Unification is the key
operation for constructing such graphs.
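For readers unfamiliar with the operation, the following toy Python
version of unification over attribute-value structures (encoded as
nested dictionaries, with atoms as strings) may help; real
implementations such as D-PATR work over graphs with shared
structure, which this sketch does not model:

  def unify(a, b):
      """Return the unification of feature structures a and b, or None."""
      if a == b:
          return a
      if isinstance(a, dict) and isinstance(b, dict):
          result = dict(a)
          for attr, value in b.items():
              if attr in result:
                  merged = unify(result[attr], value)
                  if merged is None:
                      return None       # conflicting values: failure
                  result[attr] = merged
              else:
                  result[attr] = value  # information from b is added
          return result
      return None                       # incompatible atoms or structures

  print(unify({"cat": "NP", "agr": {"num": "sg"}},
              {"agr": {"num": "sg", "per": "3"}}))
  # {'cat': 'NP', 'agr': {'num': 'sg', 'per': '3'}}
  print(unify({"agr": {"num": "sg"}}, {"agr": {"num": "pl"}}))  # None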
During this past fall and winter quarters, the FOG group has joined
forces with the HPSG project in holding weekly meetings to discuss
issues of common interest. A continuing theme in our meetings has
been the comparison and synthesis of various of the UGs, culminating
in a joint paper (with HPSG) presented at the West Coast Conference on
Formal Linguistics in which our collective view of the common
foundations of the UGs was put forward. As will be seen below, the
meetings have been devoted, as well, to the comparative study of a
variety of foundational and practical issues pertinent to the
unification-based formalisms.
Mathematical Foundations of Grammar
The focus of the mathematical effort is to develop a good general
account of the semantics of grammar formalisms. The empirical
predictions and the mathematical and computational properties of a
linguistic theory depend crucially on the form of the rules and the
conventions on their interpretation. In this vein, Shieber presented
recent work drawing an analogy between the semantics for grammar
formalisms and the type-theoretic semantics for programming languages
seen in work in computer science. This view of "parsing as type
inference", extending earlier work on denotational semantics for
grammar formalisms by Pereira and Shieber, and the parsing as
deduction view espoused by Pereira, produces a rich metaphor with
ramifications in the areas of formalisms for linguistics and
programming language design.
The FOG Colloquium speaker, William Rounds, has also recently
undertaken work in the semantic foundations of unification-based
systems. He presented his semantics based on a new logical calculus
for linguistic feature structures at the colloquium and worked with
various members of the project on such semantic issues.
Other work of the group centered on the mathematical properties of
extensions to simple UGs. For instance, the issue of monotonicity of
formal constructs is an interesting and difficult one, raising not
only mathematical questions, but closely related computational and
empirical questions as well. Because unification is associative and
commutative, in a pure unification-based grammar all statements are
order-independent and neutral with respect to parsing and generation.
This gives pure UGs a monotonic character. A series of meetings was
devoted to comparing the approaches of various theories to the problem
of nonmonotonicity, and to the actual linguistic motivation for such
nonmonotonic constructs.
Another difficulty with simple UGs, first pointed out by Mark
Johnson, is that simple versions of UGs cannot account for certain
types of constructions with multiple filler-gap dependencies. One
solution to this and a host of related problems was presented by
Ronald Kaplan in his talk on "functional uncertainty". The
mathematical and implementation issues related to Kaplan's idea are
currently being explored.
Computational Theory and Practice
The study of computational aspects of linguistic formalisms can be
pursued along two lines. First, there are questions of the abstract
computational characterization of the UGs. Some of the formal issues,
for example, the question of whether constructs in the formalisms are
monotonic, have significant ramifications in the area of computational
characterization.
Second, more practical questions of implementation of parsers or
generators for grammar formalisms are pertinent, both for their
intrinsic practical benefit, and for the insight that such efforts
provide into the more theoretical aspects of the grammatical
enterprise. A series of meetings was devoted to a discussion of the
current state of the craft of implementation of unification and
unification-based grammar formalisms. New implementations including
the Basic Linguistic Tools (BLT) and D-PATR (formerly known as HUG)
were described and compared.
The BLT project, headed by Martin Kay, has developed a set of tools
for the construction of parsers in the context of the object-oriented
programming environment called Loops. A complementary effort, to
provide the CSLI community with an efficient implementation of a
simple unification-based grammar formalism, called PATR, was
undertaken by Lauri Karttunen. A paper describing the system
("D-PATR: A development environment for unification-based grammars")
will appear shortly as a CSLI report. D-PATR is currently being used
for grammar development at SRI International and Xerox PARC. It has
also been distributed to researchers at several American and foreign
universities. D-PATR runs on Xerox 1100 series machines. A Common
Lisp implementation of PATR is being considered.
David Israel and Lauri Karttunen gave a report on new ways of
encoding semantic information using the PATR formalism. The topic of
the talk was the interpretation of complex noun phrases in the context
of situation theory. The report covered adjectives, prepositional
phrases, and relative clauses of considerable complexity, e.g., "the
company [the histogram of whose production she wants to display]".
Empirical Motivation
The entire set of issues raised by the FOG group rests upon the
actual requirements placed upon grammar formalisms by the grammatical
information they are intended to convey. Thus, the question of the
empirical motivation for various formal constructs is a crucial one.
As a prerequisite to determining the empirical motivation for a
construct, it is necessary to be able to distinguish it from its
alternatives not only "notationally" but "notionally". Thus, much of
the effort in providing explicit semantics for formalisms can aid in
this effort as well, and FOG meetings have discussed such issues of
notational and notional comparison of formalisms. Meetings comparing
PATR and HPSG, and the nonmonotonic extensions of various theories are
specific examples. Especially in the latter case, the empirical
motivation for nonmonotonicity in its several guises was pursued,
whereas the distinctions among the various particular nonmonotonic
devices were shown to be more or less notational.
HEAD-DRIVEN PHRASE STRUCTURE GRAMMAR (HPSG)
Ivan Sag and Carl Pollard
Project Participants: Lewis Creary, Mary Dalrymple, Elisa Finnie,
Lyn Friedman, Jeff Goldberg, David Gurr,
David Israel, Mark Johnson, Godehard Link,
John Nerbonne, Carl Pollard, Mats Rooth,
Ivan Sag (Project Leader), Peter Sells,
John Stonham, Tom Wasow, Leora Weitzman,
Dietmar Zaefferer, Annie Zaenen
During the winter quarter, the work of the Head-Driven Phrase
Structure Grammar project has proceeded in close consultation with
that of the related FOG project. This integration has been reflected
both by the decision to combine the regular meetings of the two groups
into a single weekly joint meeting, and by the presentation at the
recent West Coast Conference on Formal Linguistics of a paper entitled
"Unification and Grammatical Theory" jointly authored by members of
the FOG group and members of the HPSG group (Lauri Karttunen, Martin
Kay, Stuart Shieber, Carl Pollard, Ivan Sag, Ron Kaplan, and Annie
Zaenen).
Throughout the last nine months, the HPSG group has held a number
of research meetings addressing various issues pertaining to the
syntax-semantics interface, such as agreement phenomena in languages
with so-called "grammatical gender", whether to treat infinitives and
gerunds without overt subjects as sentences denoting propositions or
as verb phrases denoting properties, and differences between
constructional and quantifier phrase "binding".
The principal goal of the HPSG project has been to develop an
information-based theory of natural language syntactic and semantic
structure capable of integrating and synthesizing insights and results
produced by a variety of current syntactic-semantic approaches,
including Categorial Grammar, Generalized Phrase Structure Grammar,
Lexical-Functional Grammar, Situation Semantics, and Discourse
Representation Theory. In common with computational linguistic
formalisms such as Martin Kay's Functional Unification Grammar and SRI
International's PATR-II, HPSG describes linguistic structures
declaratively, in terms of an information domain consisting of sets of
features with their associated values (which may themselves be complex
linguistic structures). These may be represented by directed graphs
or attribute-value matrices (AVMs) whose principal combinatory mode is
the recursive information-merging operation called "unification".
HPSG work in the past quarter falls roughly into the areas of
research, pedagogy, and implementation.
Recent HPSG research has focused upon foundational issues, including:
o The mathematical, computational, and semantic properties of the
AVM formalism in terms of which the theory is couched
o The precise theoretical status of lexical entries,
grammar rules of particular languages, and principles
of universal grammar
o The prospects for bringing HPSG theory within the
compass of current efforts (by members of a semigroup
within the STASS project) to provide axiomatic foundations
for situation theory
The fundamental assumption about the semantics of the HPSG formalism
(which also underlies PATR-II) is that attribute-value matrices denote
types of linguistic objects. Given the close formal analogy between
AVMs and record types in programming languages, it is possible to
bring to bear upon the subject matter of HPSG (and FOG) research the
results of much recent work on the logic, semantics, and type theory
of computation, such as that of Dana Scott (on information systems),
Hassan Ait-Kaci (on the semantics of type structures), and William
Rounds (on the logic of record structures).
In HPSG, where the principal objects of study are taken to be
linguistic data structures called signs, the significance of the AVMs
(or directed graphs, or feature bundles) that a linguist writes down
can be described roughly as follows:
1. A grammar for a given natural language denotes a
disjoint set of types that form a partition of the nonlexical
signs in that language.
2. A hierarchical lexicon such as that employed by HPSG (or the set
of lexical templates in a PATR-II grammar) denotes a set of lexical
sign types (partially ordered by type subsumption). Individual lexical
entries (at the bottom of the hierarchy) then denote minima with
respect to that ordering, i.e., they reflect the finest lexical
distinctions made by the theory (intuitively, lexical items).
3. As Rounds shows, we can view AVMs as terms in a propositional
logic. Principles of universal grammar can then be regarded as
nonlogical axioms which necessarily hold in all natural language sign
systems.
The connection with axiomatic situation theory then arises as follows.
There is a rather natural way in which certain objects in situation
theory called (parametrized) states of affairs can be regarded as
situation types. But again there is a close formal similarity of
(parametrized) states of affairs to (parametrized) record types in
computation. And the relation theory of meaning advocated by
situation semantics makes it possible to view signs as nothing more
than types of linguistic situations. It is therefore interesting to
consider the possibility that much of linguistic theory might fall
within the scope of a suitably axiomatized situation theory. The HPSG
project will continue to explore these and related issues.
HPSG pedagogical efforts have centered on the development of a two-
quarter sequence of core graduate linguistic courses (L221, L230)
(taught by Carl Pollard, Ivan Sag, and Mats Rooth, with the assistance
of Jeff Goldberg) which presents a unified and information-based
account of the syntax and semantics of a number of centrally important
linguistic phenomena, including features and categories,
subcategorization, lexical structure and lexical rules, agreement,
control, quantification, unbounded dependencies, and anaphora. The
course material will be made available in the form of a CSLI Lecture
Notes volume and a volume of readings to be published in the fall.
HPSG implementation has proceeded on two fronts. Development of the
existing implementation at Hewlett-Packard Labs (by Susan Brennan,
Lewis Creary, Dan Flickinger, Lyn Friedman, Dave Goddeau, John
Nerbonne, and Derek Proudian) has focused largely upon expansion of
grammatical coverage, including coordination, reflexive pronouns, and
a number of comparative constructions. At the same time, preparations
are under way for a new HPSG implementation here at CSLI, including
the delivery of the first of twenty Bobcat workstations provided to
CSLI under Hewlett-Packard's University Grant Program, and ongoing
consultation with members of the FOG group on prospects for hosting
the new implementation within a version of the D-PATR (formerly known
as HUG) development environment; actual development is expected to
begin during spring quarter.
-------
∂16-Apr-86 2354 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 2, part 6
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 16 Apr 86 23:44:04 PST
Date: Wed 16 Apr 86 16:30:30-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: CSLI Monthly, No. 2, part 6
To: friends@SU-CSLI.ARPA
Tel: (415) 723-3561
COMPUTATIONAL MODELS OF SPOKEN LANGUAGE (CMOSL)
Meg Withgott
Project Participants: Marcia Bush, Daniel Huttenlocher, Stuart
Shieber, Meg Withgott (Project Leader)
The fall and winter research of the CMOSL group has concentrated in
large measure on the relationship between linguistic representation
and computational analysis of speech.
We started by observing that abstract representational units (such
as syllables and phonemic segments) appear useful for speech modeling,
even though such linguistic representations have -- at best -- an
indirect realization in the physical signal. These units can be used
to partition a large lexicon for word-candidate hypothesization, or to
specify phonetic deletion and modification sites. Yet it has proven
difficult to build acoustic classifiers reflecting such
representations, and recognition systems generally use less abstract
units.
We explored the argument that the difficulty of classifying
abstract units does not preclude using them in recognition. In
particular, constraint-based systems provide a mechanism for
exploiting abstract linguistic knowledge at the acoustic level. Since
constraint-based models can be used to specify what acoustic
information is consistent with a given abstract unit, they are a
convenient formalism for expressing such knowledge. (This is in
contrast to transformational systems wherein recognition is a
derivation accomplished by mapping between sequences of abstract
representations presupposing a reliably classified signal.)
Constraint-based models appear to provide a simple means for
expressing partial and redundant information. This ability to express
multiple degrees of specificity means the classifier can be allowed to
perform only that classification it can do reliably, while still
maintaining a lexicon based on abstract representational properties in
the model.
Pushing this notion of a classifier "doing only as much as it can",
we conducted a series of experiments to test the reliability with
which arbitrary pieces of the physical signal (we used
vector-quantized LPC spectra) can be mapped to various sets of
abstract linguistic units (acoustic-phonetic classes). The database
for the experiments consisted of approximately 130,000 spectra from a
pre-labeled corpus of 616 connected 5-digit strings, and
classification was performed on the basis of a maximum likelihood
decision rule. Classification accuracy for individual spectra (thus
using no contextual information) ranged from 94.0% for a simple
voiced-voiceless distinction to 42.7% for a set of 45
acoustic-phonetic classes when the same database was used for training
and testing.
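The decision rule itself is straightforward; the sketch below
(invented toy data, not the 130,000-spectrum corpus) classifies a
single vector-quantized spectrum by choosing the class under which
the observed codeword is most probable, with likelihoods estimated
from labeled training counts:

  from collections import Counter

  def train(labeled_codewords):
      """labeled_codewords: (codeword, class_label) pairs."""
      counts = {}
      for codeword, label in labeled_codewords:
          counts.setdefault(label, Counter())[codeword] += 1
      return counts

  def classify(counts, codeword):
      # Maximum likelihood: argmax over classes of P(codeword | class).
      return max(counts, key=lambda c: counts[c][codeword] /
                                       sum(counts[c].values()))

  model = train([(3, "voiced"), (3, "voiced"), (7, "voiceless"),
                 (3, "voiceless")])
  print(classify(model, 3))  # voiced: P(3|voiced)=1.0 > P(3|voiceless)=0.5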
We concluded that multidimensional ("cross-classified") abstract
units are desirable as a basis for classification systems in automatic
speech recognition. This is because the identity and grain-size of
the classes can be determined freely, both by what features are the
most useful for discriminating lexical items, and by what classes
prove to be the least confusable for a particular classifier. Such
flexible classification is interesting from the perspective of how
linguistic-phonetic information might be filled in when listening to
ordinary speech.
We have started using the insights from this work in designing a
constraint-based computational model of speech and language.
Reflecting the composition of the group, the CMOSL work is being
carried out in collaboration with the MIT AI lab and Schlumberger Palo
Alto Research.
---------------------
NEW INTERDISCIPLINARY UNDERGRADUATE MAJOR
Stanford is starting a new undergraduate major with intellectual
and institutional links to CSLI. Entitled "Symbolic Systems", the
program will emphasize issues having to do with the representation of
information and its processing by minds and machines.
The Symbolic Systems curriculum includes a required set of core
courses: four in computer science, two in logic, two in philosophy,
two or three in linguistics, and one in psychology. Each student will
also be required to complete a concentration consisting of at least
four additional courses; concentrations may be individually designed
(in consultation with an advisor) or may be selected from the
following list: artificial intelligence, cognitive science,
computation, logic, natural language, philosophical foundations,
semantics, and speech. Several new courses will be developed for the
major, including undergraduate offerings in the philosophy of
language, computational linguistics, the semantics of programming
languages, and ethical issues in the uses of symbolic systems.
Planning for the new program began last summer. A proposal was
drawn up by a committee consisting of Jon Barwise, Herb Clark, John
Etchemendy, Nils Nilsson, Helen Nissenbaum, Stuart Reges, Ivan Sag,
and Tom Wasow. The proposal was approved by the Faculty Senate in
March. Financial support during the planning process was provided by
the Provost's Fund for Innovation in Undergraduate Education. The
School of Humanities and Sciences has made a five-year commitment for
modest financial support, and potential outside sources of funding are
now being explored.
The Symbolic Systems program committee and affiliated faculty
consist largely of individuals involved in the work at CSLI. They
include all of the members of the planning committee, plus: Phil
Cohen, Solomon Feferman, David Israel, Ron Kaplan, John McCarthy, Ray
Perrault, John Perry, Stanley Peters, Paul Rosenbloom, Stan
Rosenschein, Brian Smith, and Terry Winograd.
---------------------
CSLI POSTDOCTORAL FELLOWS
-------------
Editor's note
Current CSLI Postdoctoral Fellows are: Ivan Blair, PhD from the School
of Epistemics, University of Edinburgh; Carol Cleland, PhD in
Philosophy from Brown University; Mark Gawron, PhD in Linguistics from
the University of California, Berkeley; Helene Kirchner, PhD in
Computer Science from the University of Nancy; Christopher Menzel, PhD
in Philosophy from the University of Notre Dame; Mats Rooth, PhD in
Linguistics from the University of Massachusetts; Peter Sells, PhD in
Linguistics from the University of Massachusetts; Edward Zalta, PhD in
Philosophy from the University of Massachusetts.
Three of these fellows are introduced below; the others will be
introduced in following issues.
-------------
CAROL CLELAND
While completing her graduate work at Brown, Cleland was referred
to as the "odd philosopher who was interested in computer science."
Along with her graduate work she was a systems programmer for Prof.
Jim Anderson in connection with his work on neural models, and
designed and taught a course called "Minds and Machines". She heard
rumors that "something funny" was going on at Stanford, and after
looking for a niche in the Wheaton philosophy department and in a
small software company, she called Julius Moravcsik. She was
surprised to learn that at CSLI there were, in fact, a number of
philosophers interested in representation and the nature of
computation and a number of computer scientists interested in
philosophy.
Subsequently, she accepted a CSLI postdoctoral fellowship and began
a year of commuting to CSLI from Santa Rosa -- an hour and forty-five
minutes each way. "That's real dedication", she says.
She found CSLI to be "...like the Tower of Babel with all the
different fields trying to talk to each other". As she hoped,
informal discussions in this environment and participation in various
project meetings (in particular the Representation group and the
Situated Engine Company, a subgroup of STASS) helped develop her
understanding of the nature of computation. For example, she is
currently generalizing her philosophical work on the nature of events
to an account of change, with particular emphasis on the nature of
computational processes. She is also teaching a revised, but again
well-received, version of "Minds and Machines". A CSLI-inspired
project she intends to pursue in the next year is an account of the
nature of representation; since coming here she has changed her mind
and now suspects that computation probably does presuppose
representation.
Later this summer, Cleland will be leaving CSLI to accept a tenure
track position in the department of philosophy at the University of
Colorado, Boulder. She feels this is an ideal position for her -- one
that will allow her to continue her research among a group of
colleagues with similar interests.
IVAN BLAIR
Blair received his PhD from Edinburgh University's School of
Epistemics, which has recently been renamed the Centre for Cognitive
Science; thus an interdisciplinary environment was not new to him.
His view was that the goal of understanding systems of communication,
including natural language, might best be reached by beginning with
systems where separation of the form and content of information has
not progressed to the degree it has in the case of natural language.
He felt that ecological psychology would provide an obvious point of
departure, and detected some sympathy with this point of view in the
work of Jon Barwise, John Perry, and Brian Smith.
During his time at CSLI, Blair has conducted research on
intentionality, broadly construed. He has approached this topic from
the perspective of a (critical) realism, and sought to understand what
a satisfactory account of intentionality that rejects emergent
materialism, reductionist physicalism, or some form of dualism would
look like. The main focus of his research has been to elucidate the
relation between syntax and semantics (or matter and meaning). He has
studied the work of Howard H. Pattee on symbol-matter systems and
read widely in Gibsonian or ecological psychology.
He enjoys discussions with the other CSLI researchers interested in
philosophical foundations of a theory of information or
intentionality. Blair is a member of the Representation and Reasoning
project and, along with Carol Cleland and Meg Withgott, organized a
reading and discussion group on representation and perception. He
also taught a course in logic for the philosophy department. He feels
he has gained an appreciation of the issues involved in understanding
the nature of intentionality, and of the virtues and problems of
various approaches that have been proposed. He believes that much
more research than is currently underway is required on the
foundational issues germane to the study of cognition, meaning, and
information. Blair considers his own research as a part of this
larger task and as complementing the work of other philosophers at
CSLI on these topics.
Blair plans to return to the United Kingdom to look for an academic
home. He wants to continue thinking through the foundational,
philosophical questions in this interdisciplinary field, so that more
specialized research may have a philosophically sound basis to rest
on.
CHRIS MENZEL
Menzel came to CSLI after completing his doctorate in philosophy at
the University of Notre Dame. He applied for a postdoc at the
suggestion of Howard Wettstein, who had been a visiting scholar in
Stanford's philosophy department, and with whom he had been having
regular meetings to discuss a broad constellation of issues in
metaphysics and the philosophy of language.
His work at CSLI has centered on a number of traditional issues in
the philosophy of logic and mathematics. The major focus of his
research has been the development of a version of the "type-free"
conception of properties and relations so prominent in recent
metaphysics, including, e.g., situation theory. Over the past year he
has developed a complete logic based on this conception (to appear
shortly as a CSLI report), and he is currently applying the logic to
the philosophical issue of the nature of number, and to the semantics
of numerical expressions in English. Other papers completed at CSLI
are: "On the Iterative Explanation of the Paradoxes", Philosophical
Studies 49, (1986), 37-61; "Paradoxes, Large Sets, and Proper
Classes", delivered at the Eastern Meeting of the APA, December 1985;
and "On Set Theoretic Possible Worlds", forthcoming in "Analysis".
Menzel has taught two courses at Stanford during his tenure at
CSLI. In 1984-85, he taught an introductory course on the theory of
computability. In 1985-86, at the behest of Stanford's philosophy
department chairman, who wished to take advantage of Menzel's eclectic
philosophical interests, he taught an undergraduate course entitled
"Philosophy, Theology, and Religious Belief".
Menzel is most enthusiastic about the opportunities he has had to
work with researchers he previously assumed he would know only through
publications. He feels his research has taken turns it could not have
taken in a non-interdisciplinary environment or without the
computational equipment and large blocks of research time CSLI has
provided. He looks forward to beginning an assistant professorship in
the philosophy department of Texas A&M University this fall,
where, in addition to his teaching duties, he plans to continue his
research -- the next chunk being problems of modality in logic,
semantics, and situation theory.
---------------------
CSLI SNAPSHOTS: PAT HAYES
CSLI's research is greatly enhanced by the participation of
scholars and scientists employed by Bay Area institutes other than its
"founding" institutions: SRI International, Stanford, and Xerox PARC.
Someday CSLI may get around to formalizing criteria for membership
that clarify the value placed on the participation of folk not
officially included in current CSLI grants. That this participation
should be valued is obvious, since among these folk are some of the
most exciting and interesting members of the worldwide "information
and cognition" community, such as Pat Hayes.
Hayes is a member of the technical staff at Schlumberger. He
arrived last summer from the University of Rochester where he was the
Henry E. Luce Professor of Cognitive Science. While he enjoyed the
support of an interdisciplinary environment at Rochester, he was
attracted by what appeared from a distance to be vast amounts of work
in AI going on in several Bay Area locations. The Bay Area, and CSLI
in particular, "seemed unique in securing the commitment of lots of
good researchers from a variety of disciplines to talk to each other".
He has found that his view from a distance was accurate. He says that
each week contains one-and-a-half weeks' worth of events which it would
be unthinkable to miss in any other location -- even if it meant
driving an hour or so each way.
Hayes' research goal is to formalize a commonsense physical
knowledge base, i.e., a knowledge base of "naive physics". He wanted
to expand the theoretical aspects of his research in the context of
applied products, and has found environments for exactly that at
Schlumberger and CSLI. At Schlumberger he is working on a project to
design an interactive knowledge base of theorems and metatheorems of
different aspects of the everyday physical world. At CSLI, he is
participating in the Rational Agency and SEC (Situated Engine Company,
a subgroup of STASS) groups, discussing questions of representational
foundations and their semantics.
One of the special side benefits of Hayes' presence at CSLI is the
"unexplained" appearance now and then of wonderful portraits of CSLI
researchers. These may be on paper cups or any other handy medium.
Hayes' portraits capture the essence of the researchers just as his
comments and questions capture the essence of the issues under
discussion as the sketch is being completed.
---------------------
-------
∂17-Apr-86 0038 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 2, part 7 (and last)
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 17 Apr 86 00:38:03 PST
Date: Wed 16 Apr 86 16:31:06-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: CSLI Monthly, No. 2, part 7 (and last)
To: friends@SU-CSLI.ARPA
Tel: (415) 723-3561
CSLI VISITING SCHOLARS
CSLI benefits from the active participation of a number of visiting
scholars from all parts of the world. These visitors may stay a few
weeks or as much as a year or more. Typically, there are a dozen on
site at any one time. The following scholars were on site during
March and April:
Dorit Abusch
Lecturer at Tel Aviv University
Dates of visit: Summer, 1985, and February - August, 1986
Abusch is participating in several of the syntax and semantics groups
and is completing a paper on the semantics of tense and aspect.
Peter Aczel
Professor of Mathematics
Manchester University
Dates of visit: Winter quarter of 1984/85, and March, 1986
Aczel visited for a quarter last year, when he presented his lectures
on the anti-foundation axiom in set theory, and related work on
processes in computer science. He returned this year to prepare the
notes for these lectures for a CSLI Lecture Notes volume, and to work
on a paper with Barwise on the mathematics of shared information.
Haim Gaifman
Professor of Mathematics
Hebrew University
Dates of visit: Academic year, 1985/86
Gaifman is working on many issues in the logic of computer science,
and is also involved with the Situated Automata project. He has
lectured on a new approach to a truth definition for circularity, one
that he calls the logic of pointers, and on a hierarchy in inductive
definability on finite structures, where one keeps track of the number
of parameters and variables in the definitions.
Claudia Gerstner
University of Munich
Dates of visit: Academic year, 1985/86
Gerstner is pursuing research in theoretical linguistics, in
particular, in universal aspects of generic constructions in language;
and is translating Situations and Attitudes into German.
Roland Hausser
Privatdozent at the Institut fur Deutsche Philologie
University of Munich
Dates of visit: Fall and spring quarters, 1984/85, fall and
spring quarters, 1985/86
Hausser has been working on a left-associative approach to the syntax and
semantics of natural language, and is completing a manuscript to
appear soon in Springer-Verlag's series, "Lecture Notes in Computer
Science".
Jens Kaasboll
Research Associate in Computer Science
University of Oslo
Dates of visit: Winter quarter, 1986
Kaasboll came to CSLI to further his research in the SYDPOL (System
Development and Profession-Oriented Languages) project by learning
about the linguistic approaches to system description being developed
here, and to provide CSLI with his insights into actual system
settings (such as a nursing ward, where he did his dissertation
study).
Birgit Landgrebe
Mathematics Department
Aarhus University
Dates of visit: December, 1985 - July, 1986
Landgrebe is pursuing her study of language development environments
through participation in the Semantics of Computer Languages project,
where she is developing and integrating an attribute evaluation module
for its MUIR system.
Godehard Link
Professor of Philosophy
University of Munich
Dates of visit: Academic year, 1985/86
Link's current research project, "Algebraic Semantics", is closely
related to some basic questions about the nature of information. He
says that CSLI's strong emphasis on foundational issues in the field
of semantics has led him to rethink some methodological problems
concerning language and information, and to put his own semantical
work in a broader perspective.
Kim Halskov Madsen
Computer Science Department
Aarhus University
Dates of visit: March - July, 1986
Madsen is working on systems description languages and the
identification of structured domains. He is collaborating with Terry
Winograd and is in the process of writing a paper tentatively
entitled "Breakthrough by Breakdown: Structured Domains, Metaphors,
and Frames".
Kurt Normark
Computer Science Department
Aarhus University
Dates of visit: Academic years, 1984/85 and 1985/86
Normark's research interest is in program development tools,
especially on graphical workstations and in man/machine interactions
on workstations with pointing devices. Currently, he is participating
in the Semantics of Computer Languages project, and is especially
interested in semi-automatic and interactive tools for program
creation from specifications.
Gordon Plotkin
Professor of Computer Science
University of Edinburgh
Dates of visit: January, 1984 - January, 1985, and Spring quarter, 1986
Plotkin is working on applications of Aczel's notion of Frege
Structure, and, with Carl Pollard, on applications of domain theory to
model certain axioms of situation theory.
Chris Swoyer
Professor of Philosophy
University of Oklahoma
Dates of visit: Spring quarter, 1986
Swoyer has been working on properties and their role in accounts of
measurement, information, and supervenience, and in the philosophy of
mind and the semantics of natural language. He feels his work on
properties has turned out to be quite compatible with a number of
aspects of situation theory.
Dag Westerstahl
Professor of Philosophy
University of Goteborg
Dates of visit: September, 1985 - February, 1986
Westerstahl's field of research includes abstract model theory,
generalized quantifiers, natural language semantics, and processing.
He came to CSLI to become better acquainted with the notion of
"situated language", and has been running the AFA (Anti-Foundation
Axiom) seminar, which has been discussing Aczel's Notes and Barwise and
Etchemendy's monograph on the Liar Paradox. "I already knew that
situation theory interested me before I came here, but there are so
many aspects and nuances of a developing field of research that you
can only perceive if you're on the spot. Obviously my stay here has a
great influence on my work in natural language semantics and
processing, both with respect to specific research problems and
methods, and the general outlook on language".
Dietmar Zaefferer
Professor of Linguistics
University of Munich
Dates of visit: April, 1984 - March, 1986
Zaefferer has been working on the philosophy of language,
investigating the semantics of declaratives and exclamatories. He
recently summarized some aspects of his work in a CSLI seminar
entitled "The Structural Meaning of Clause Type: Capturing Cross-modal
and Cross-linguistic Generalizations".
---------------------
NEW CSLI PUBLICATIONS
Reflecting the research that has been done at the Center, nearly 50
CSLI Technical Reports, four titles in the Lecture Notes series, and
five titles in the Informal Notes series have been published to date.
The most recent Reports are listed below; the Reports and a complete
list of publications can be obtained by writing to Trudy Vizmanos,
CSLI, Ventura Hall, Stanford, CA 94305, or Trudy@su-csli.
46. Constraints on Order
Hans Uszkoreit
47. Linear Precedence in Discontinuous Constituents:
Complex Fronting in German
Hans Uszkoreit
The titles in the Lecture Notes Series are distributed by the
University of Chicago Press and may be purchased in academic or
university bookstores, or ordered directly from the distributor at
5801 Ellis Avenue, Chicago, IL 60637. The most recent publication in
this series is:
Lectures on Contemporary Syntactic Theories
Peter Sells -- Paper $12.95, Cloth $23.95
---------------------
LETTERS TO THE EDITOR
CSLI MONTHLY I AND THE DELICATE ART OF CARICATURE
Madame editor:
Thurber tells us a lot about language and information. For instance,
he reminds us of E. B. White's comment that "...humorous writing, like
poetical writing, has extra content. It plays, like an active child,
close to the big hot fire which is Truth." So it is not surprising
that there were a few scorched highbrows after collective perusal of
the first CSLI Monthly, especially the part after the section on the
"convergence of theories and ideas":
Imagine a typical philosopher, a typical linguist, and a typical
computer scientist. The philosopher is happy with low-key funky
surroundings, and can't be bothered with machinery, relying
instead on books, paper, and number 2 pencils. The linguist is
accustomed to low-key funky surroundings, and is content in any
setting where there are other linguists, coffee, and devices
(blackboards, whiteboards, or computers) that can handle trees or
functional diagrams. The computer scientist has become part of
the wonderful new technology s/he has helped to develop, to the
extent that s/he can't even imagine how to communicate with the
person at the next desk when the computer is down.
Folks said to themselves, "Gee, since I always believe E. B. White,
and since this is humorous, it must be true." This caused undue
consternation, not to mention identity crises.
Not to worry. We have learned that this humorous writing was
propagated by a philosopher. Paraphrasing Thurber (at least his
syntax): Since the nature of humor is obviously anti-philosophic, just
as the nature of philosophy is anti-humor, such philosophical penning
amounts, in effect, to anti-humorous humorous writing.
So E. B. White stands, and we can sit down and relax. (But the part
about the number 2 pencils remains as good advice to philosophers,
given that pencils' delete-functions work even during power failures.)
(Signed)
A Computational Linguist-Philosopher
-------------
We asked the author of the offending paragraph for a reply, which he
wrote, and then erased. --ed.
-------------
To the editor:
I have enjoyed getting CSLI's publications, though I seldom have time
to devour them. But I especially liked the format and contents of the
new CSLI Monthly.
Keep up the good work --
James Rosse
Provost, Stanford University
---------------------
--Elizabeth Macken
Editor
-------
∂17-Apr-86 0118 EMMA@SU-CSLI.ARPA Calendar, April 17, No. 12
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 17 Apr 86 01:16:59 PST
Date: Wed 16 Apr 86 18:18:59-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Calendar, April 17, No. 12
To: friends@SU-CSLI.ARPA
Tel: (415) 723-3561
!
C S L I C A L E N D A R O F P U B L I C E V E N T S
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
April 17, 1986 Stanford Vol. 1, No. 12
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR THIS THURSDAY, April 17, 1986
12 noon TINLunch
Ventura Hall Understanding Computers and Cognition
Conference Room by Terry Winograd and Fernando Flores
Discussion led by Brian Smith (Briansmith.pa@xerox)
2:15 p.m. CSLI Seminar
Ventura Hall Representation: On Stich's Case Against Belief
Trailer Classroom John Perry (John@su-csli)
(Abstract on page 2)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Turing Auditorium Intention, Belief and Practical Reasoning
Hector-Neri Castaneda, Indiana University
--------------
CSLI ACTIVITIES FOR NEXT THURSDAY, April 24, 1986
12 noon TINLunch
Ventura Hall No TINLunch this week
Conference Room
2:15 p.m. CSLI Seminar
Ventura Hall Models, Modelling, and Model Theory
Trailer Classroom John Etchemendy and Jon Barwise
(Etchemendy@su-csli, Barwise@su-csli)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Seminar
Ventura Hall Lexical Rules and Lexical Representations
Trailer Classroom Annie Zaenen (zaenen.pa@xerox)
(Abstract on page 2)
--------------
ANNOUNCEMENT
Please note that the colloquium for this week is in Turing Auditorium.
Note also that there is no colloquium for next week, but that the
seminar originally scheduled for March 6 will take place instead.
!
Page 2 CSLI Calendar April 17, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
THIS WEEK'S SEMINAR
Representation: On Stich's Case Against Belief
John Perry (John@su-csli)
This Fall I gave a seminar on Steven Stich's book, ``From Folk
Psychology to Cognitive Science''. I have quite a bit of material
from the seminar, which had the active participation of a number of
members of the representation group. I am having a little bit of a
problem deciding what to present tomorrow. The criticisms of Stich
tend to be definitive (I think), but perhaps not of such wide general
interest. There are also a number of sketchy positive ideas, much
less definitive but probably of wider interest, mainly on the issue of
what sort of attributions of content to minds require attribution of
representations. The seminar will deal either with criticisms of
Stich, or the sketchy ideas, or some combination of the two, or
something new that crops up between now and then. I will do my best
to make it either polished and definitive or sketchy but provocative,
but I doubt that I will manage to do both.
--------------
CSLI SEMINAR
Lexical Rules and Lexical Representations
Mark Gawron, Paul Kiparsky, Annie Zaenen
4:15 p.m., April 24, CSLI Trailer Classroom
This is the third of a series of talks reflecting the ongoing
elaboration of a model of lexical representation. In the first, Mark
Gawron discussed a frame-based lexical semantics and its relationship
to a theory of lexical rules. In the second, Paul Kiparsky proposed a
theory of the linking of thematic roles to their syntactic realizations,
emphasizing its interactions with a theory of morphology; and in this
one, a sub-workgroup of the lexical project will sketch a unification
based representation for the interaction of the different components
of the lexical representation and both syntax and sentence semantics.
This seminar was originally scheduled for March 6.
--------------
AFT TALK
On Belief Context and Identity
Nathan Salmon, UCSB
Ventura Conference Room
11 a.m. - 1 p.m., Tuesday, April 22
-------
∂23-Apr-86 1813 EMMA@SU-CSLI.ARPA Calendar, April 24, No. 13
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 23 Apr 86 18:13:19 PST
Date: Wed 23 Apr 86 17:41:01-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Calendar, April 24, No. 13
To: friends@SU-CSLI.ARPA
Tel: (415) 723-3561
!
C S L I C A L E N D A R O F P U B L I C E V E N T S
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
April 24, 1986 Stanford Vol. 1, No. 13
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR THIS THURSDAY, April 24, 1986
12 noon TINLunch
Ventura Hall No TINLunch this week
Conference Room
2:15 p.m. CSLI Seminar
Ventura Hall Uses and Abuses of Models in Semantics
Trailer Classroom John Etchemendy and Jon Barwise
(Etchemendy@su-csli, Barwise@su-csli)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Ventura Hall Lexical Rules and Lexical Representations
Trailer Classroom Annie Zaenen (zaenen.pa@xerox)
(Originally scheduled as a CSLI Seminar on March 6)
--------------
CSLI ACTIVITIES FOR NEXT THURSDAY, May 1, 1986
12 noon TINLunch
Ventura Hall Selections from On the Plurality of Worlds
Conference Room by D. Lewis
Discussion led by Ed Zalta (Zalta@su-csli)
(Abstract on page 2)
2:15 p.m. CSLI Seminar
Ventura Hall Visual Communication (Part 1 of 3)
Trailer Classroom Sandy Pentland and Fred Lakin
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Ventura Hall Structures in Written Language
Trailer Classroom Geoff Nunberg
--------------
!
Page 2 CSLI Calendar April 24, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
THIS WEEK'S SEMINAR
Uses and Abuses of Models in Semantics
Jon Barwise and John Etchemendy
Barwise@su-csli and Etchemendy@su-csli
The use of set-theoretic models as a way to study the semantics of
both natural and computer languages is a powerful and important
technique. However, it is also fraught with pitfalls for those who do
not understand the nature of modeling. In this talk we hope to show
how a proper understanding of the representation relationship implicit
in modeling can help one exploit the power while avoiding the
pitfalls. Learn how to disarm your foes and impress your friends at
one go. The talk will presuppose some familiarity with the techniques
under discussion.
--------------
NEXT WEEK'S TINLUNCH
Selections from On The Plurality of Worlds
by D. Lewis
Discussion led by Ed Zalta (Zalta@su-csli)
Lewis' new book, On The Plurality of Worlds, contains a defense of
his modal realism, the thesis that the world we are part of is but one
of a plurality of worlds, and that we who inhabit this world are only
a few out of all the inhabitants of all the worlds. In this TINLunch,
I'll describe the overall plan of the book, and then focus both on
some of Lewis' replies to objections and on his objections to the
program of ``ersatz modal realism,'' in which other worlds are
replaced by representations of some sort.
--------------
NEXT WEEK'S SEMINAR
Visual Communication
Sandy Pentland, Fred Lakin, Guest Speakers
May 1, 8, and 15
Speakers in this series will discuss and illustrate ongoing research
concerned with mechanisms of visual communication and visual languages
and the identification of visual regularities that support the
distinctions and classes necessary to general-purpose reasoning. Alex
Pentland will discuss how organizational regularities in human
perception can be used to facilitate a rational computer system for
3-D graphics modelling. Fred Lakin will describe a Visual
Communication Lab, and, in particular, a project to construct visual
grammars for visual languages. Examples show the use of these
grammars to recognize and parse ``blackboard'' diagrams.
!
Page 3 CSLI Calendar April 24, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
PIXELS AND PREDICATES
Prolog and Geometry
Randolph Franklin, UC at Berkeley
wrf@degas.berkeley.edu
1:00 p.m., Tuesday, April 29, CSLI trailers
(Note change in day)
The Prolog language is a useful tool for geometric and graphics
implementations because its primitives, such as unification, match the
requirements of many geometric algorithms. We have implemented
solutions to several problems in Prolog, including a subset of the Graphics
Kernel Standard, convex hull finding, planar graph traversal, recognizing
groupings of objects, and boolean combinations of polygons using
multiple precision rational numbers. Certain paradigms, or standard
forms, of geometric programming in Prolog are becoming evident. They
include applying a function to every element of a set, executing a
procedure so long as a certain geometric pattern exists, and using
unification to propagate a transitive function. Certain strengths and
weaknesses of Prolog for these applications are now apparent.
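To make these paradigms concrete, here is a minimal sketch in Prolog
(illustrative predicates of our own devising, not Franklin's code):

   % Applying a function to every element of a set: translate
   % each vertex of a polygon by an offset (DX,DY).
   translate_all([], _, _, []).
   translate_all([point(X,Y)|Ps], DX, DY, [point(X2,Y2)|Qs]) :-
       X2 is X + DX,
       Y2 is Y + DY,
       translate_all(Ps, DX, DY, Qs).

   % Using unification to propagate a transitive relation:
   % reachability over the edges of a planar graph.
   connected(A, B) :- edge(A, B).
   connected(A, B) :- edge(A, C), connected(C, B).

A query such as ?- translate_all([point(0,0),point(1,0)], 2, 3, Qs).
then yields Qs = [point(2,3),point(3,3)].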
-------
∂25-Apr-86 0947 EMMA@SU-CSLI.ARPA Logic seminar
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 25 Apr 86 09:47:23 PST
Date: Fri 25 Apr 86 09:01:30-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Logic seminar
To: friends@SU-CSLI.ARPA
Tel: (415) 723-3561
note: testing
Seminar in Logic and Foundations of Mathematics
Speaker: Prof. Michael Beeson, San Jose State, visiting Stanford
Title: Toward a computation system based on set theory
Time: Tuesday, April 29, 4:15-5:30
Place: Third floor lounge, Math Dept Bldg 380, Stanford.
S. Feferman
-------
-------
∂28-Apr-86 1000 EMMA@SU-CSLI.ARPA CSLI Calendar update
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 28 Apr 86 09:58:38 PDT
Date: Mon 28 Apr 86 09:21:16-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: CSLI Calendar update
To: friends@SU-CSLI.ARPA
Tel: (415) 723-3561
note: testing
CSLI COLLOQUIUM
Structures in Written Language
Geoff Nunberg
4:15, Thursday, May 1, Redwood G-19
Just about all contemporary research on linguistic structure has
been based exclusively on observations about the spoken language; the
written language, when it is talked about at all, is generally taken
to be derivative of speech, and without any independent theoretical
interest. When we consider the written language in its own terms,
however, it turns out to have a number of distinctive features and
structures. In particular, it contains a number of explicitly
delimited "text categories," such as are indicated by the common
punctuation marks and related graphical features, which are either
wholly absent in the spoken language, or at best are present there
only implicitly. In the course of uncovering the principles that
underlie the use of text categories like the text-sentence, paragraph,
and parenthetical (i.e., a string delimited by parentheses), we have
to provide three levels of grammatical description: a semantics, which
sets out the rules of interpretation associated with text categories
by associating each type with a certain type of informational unit; a
syntax, which sets out the dependencies that hold among
category-types; and a graphology, which gives the rules that determine
how instances of text categories will be graphically presented. Each
of these components is a good deal more complex and less obvious than
one might suppose on the basis of a recollection of what the didactic
grammars have to say about the written language; what emerges, in
fact, is that most of the rules that determine how text delimiters are
used are not learned through explicit instruction, and are no more
accessible to casual reflection than are the rules of grammar of the
spoken language.
(Please ignore the note in my header saying testing; I'm having a bit
of a tussle with my mailer at the moment.)
-------
∂30-Apr-86 1803 EMMA@SU-CSLI.ARPA Calendar, May 1, No. 14
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 30 Apr 86 17:48:43 PDT
Date: Wed 30 Apr 86 17:01:30-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Calendar, May 1, No. 14
To: friends@SU-CSLI.ARPA
Tel: (415) 723-3561
!
C S L I C A L E N D A R O F P U B L I C E V E N T S
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
May 1, 1986 Stanford Vol. 1, No. 14
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR THIS THURSDAY, May 1, 1986
12 noon TINLunch
Ventura Hall Selections from ``On the Plurality of Worlds''
Conference Room by D. Lewis
Discussion led by Ed Zalta (Zalta@su-csli)
2:15 p.m. CSLI Seminar
Ventura Hall Visual Communication (Part 1 of 3)
Trailer Classroom Sandy Pentland (Pentland@sri-ai)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Redwood Hall Structures in Written Language
Room G-19 Geoff Nunberg (Nunberg@csli)
(Abstract on page 2)
--------------
CSLI ACTIVITIES FOR NEXT THURSDAY, May 8, 1986
12 noon TINLunch
Ventura Hall Definiteness and Referentiality
Conference Room Vol. 1, Ch. 11 of ``Syntax: A
Functional-Typological Introduction''
by Talmy Givon
Discussion led by Mark Johnson (Johnson@csli)
(Abstract on page 2)
2:15 p.m. CSLI Seminar
Ventura Hall On Visual Communication (Part 2 of 3)
Trailer Classroom David Levy, Xerox PARC (Dlevy.pa@xerox)
(Abstract on page 2)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Redwood Hall Whither CSLI?
Room G-19 John Perry, Director, CSLI
(Abstract on page 3)
--------------
!
Page 2 CSLI Calendar May 1, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
THIS WEEK'S COLLOQUIUM
Structures in Written Language
Geoff Nunberg (Nunberg@csli)
Just about all contemporary research on linguistic structure has
been based exclusively on observations about the spoken language; the
written language, when it is talked about at all, is generally taken
to be derivative of speech, and without any independent theoretical
interest. When we consider the written language in its own terms,
however, it turns out to have a number of distinctive features and
structures. In particular, it contains a number of explicitly
delimited ``text categories,'' such as are indicated by the common
punctuation marks and related graphical features, which are either
wholly absent in the spoken language, or at best are present there
only implicitly. In the course of uncovering the principles that
underlie the use of text categories like the text-sentence, paragraph,
and parenthetical (i.e., a string delimited by parentheses), we have
to provide three levels of grammatical description: a semantics, which
sets out the rules of interpretation associated with text categories
by associating each type with a certain type of informational unit; a
syntax, which sets out the dependencies that hold among category-types;
and a graphology, which gives the rules that determine how instances
of text categories will be graphically presented. Each of these
components is a good deal more complex and less obvious than one might
suppose on the basis of a recollection of what the didactic grammars
have to say about the written language; what emerges, in fact, is that
most of the rules that determine how text delimiters are used are not
learned through explicit instruction, and are no more accessible to
casual reflection than are the rules of grammar of the spoken
language.
--------------
NEXT WEEK'S TINLUNCH
Definiteness and Referentiality
Vol. 1, Ch. 11 of
Syntax: A Functional-Typological Introduction
by Talmy Givon
Discussion led by Mark Johnson (Johnson@csli)
The relationship between syntactic structure and meaning is one of
the most interesting lines of research being undertaken here at CSLI.
One of the questions being addressed in this work concerns the way
that grammatical or syntactic properties of an utterance interact with
its semantics, i.e., what it means. Givon and others claim that
discourse notions of topicality and definiteness interact strongly
with grammatical processes such as agreement---and moreover, that
there is no clear dividing line between grammar and discourse; one
cannot understand agreement or anaphora viewing them as purely
grammatical processes. Linguists here at CSLI are tentatively moving
toward this position, for example Bresnan and Mchombo (1986) make
explicit use of a theory of ``discourse functions'' to explain the
distributional properties of Object Marking in Chichewa, so a
discussion of what it would mean to have an ``integrated'' theory of
language is quite timely.
!
Page 3 CSLI Calendar May 1, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Givon's treatment of definiteness and referentiality explicitly
rejects earlier philosophical treatments as being ``too restrictive to
render a full account of the facts of human language.'' He starts by
listing some observations on the interactions between definiteness and
a variety of other linguistic phenomena (e.g. modality) and goes on to
propose a model based on a ``Universe of Discourse'' and the notion of
``referential intent.'' After examining examples of how
referentiality is coded in various languages and how it interacts with
various other syntactic and semantic phenomena, he finishes by
discussing degrees of definiteness and referentiality, and introduces
the notion of communicative importance.
This chapter raises several interesting questions. For example,
what are the key properties of referentiality and definiteness, and
how would one go about building a theory that expresses them? What
are Givon's insights into this matter, and how could these be
reconstructed within a formal theory such as DRS theory or Situation
Semantics?
--------------
NEXT WEEK'S SEMINAR
On Visual Communication
David Levy, Xerox Palo Alto Research Center (Dlevy.pa@xerox)
Lately there has been much talk around CSLI about representation as
a concept transcending and unifying work being done in different
research groups and domains. Various points have emerged and recurred
in recent presentations and discussions: the distinction between the
representing state of affairs (A) and the state of affairs represented
(B); examples of the dangers inherent in conflating them; forms of
structural correspondence between aspects (objects, properties, and
relations) of A and aspects of B; the partiality of representation
(the fact that only certain aspects of A correspond to aspects of B,
and that only certain aspects of B correspond to aspects of A); the
priority of B over A; and so on.
The use of computers is largely mediated by representations. Many
of these are transparent to us: We talk of ``typing an A'' when we
actually press a key, causing a character code (a character
representation) to be generated from which an actual character is
rendered. We talk of ``viewing'' data structures, when in fact we do
nothing of the sort, since data structures ``inside'' machines are
inherently non-visual, much as are mental states ``inside'' heads;
rather, we view *visual representations* of data structures.
In many contexts the transparency of representations (leading to
the conflation of A and B) is tremendously useful and powerful. The
term ``direct manipulation'' denotes a style of user interface design in
which the user is led (or encouraged) to conflate the visual objects
on the screen (e.g. icons) with the things they represent (e.g.
printers), and to conflate the representation of these visual objects
with the visual objects themselves. But there seem to be times when
our facility for seeing through representations is a hindrance rather
than a help, as Barwise and Etchemendy argued recently for the case of
model theory.
!
Page 4 CSLI Calendar May 1, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
As a theoretician and observer of certain classes of computer
systems, and, equally importantly, as a *designer* of them, I believe
that we need an understanding of representation (and of the sorts of
issues described in the first paragraph) to help us build truly
rational systems. In this talk I will focus on the problem of
developing an analysis of visual representation. I will use examples
from the surface of computer screens (e.g. windows, scroll bars, and
icons) to illustrate the importance of distinctions such as visual vs.
non-visual entities, representing vs. represented entities, and
(active) processes vs. (static) representation relations.
------------
NEXT WEEK'S COLLOQUIUM
Whither CSLI?
John Perry, Director, CSLI
In this talk, I will try to bring everyone interested enough to come
up to date on several issues regarding CSLI's long range and not so
long range future, specifically:
1. What we are going to do for money when the SL grant from
SDF runs out.
2. What we are going to do for space when the permit for the
``trailers'' runs out.
3. Issues connected with CSLI's governance and ontological
status, or, ``Can Augustine's account of the trinity be
adapted for the CSLI environment?,'' or ``Who wants to
be the Holy Ghost?''
------------
LOGIC SEMINAR
Dynamic Algebras and the Problem of Induction
Vaughan Pratt, Dept. of Computer Science, Stanford
4:15, Tuesday, May 6, Math. Dept. 383-N
-------
∂01-May-86 1419 EMMA@SU-CSLI.ARPA Calendar updates
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 1 May 86 14:19:48 PDT
Date: Thu 1 May 86 13:23:09-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Calendar updates
To: friends@SU-CSLI.ARPA
Tel: (415) 723-3561
Three messages:
CSLI Talk
Verbs of Change and the Semantics of Aspect
Dorit Abusch, Tel Aviv and CSLI
CSLI Seminar Room, 10:45, Tuesday, May 6
------------
The title of David Levy's seminar next Thursday at 2:15 is "On Visual
Representation" not "On Visual Communication" as announced in the
Calendar.
------------
The title of the Logic Seminar by Vaughan Pratt is "Dynamic Algebras
and the Nature of Induction" not "Dynamic Algebras and the Problem of
Induction" as announced in the Calendar. The Logic Seminar is
Tuesday, May 6 at 4:15 in Math. Bldg. 383-N.
-------
∂07-May-86 1715 EMMA@SU-CSLI.ARPA Calendar, May 8, No. 15
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 7 May 86 17:15:41 PDT
Date: Wed 7 May 86 16:19:16-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Calendar, May 8, No. 15
To: friends@SU-CSLI.ARPA
Tel: (415) 723-3561
!
C S L I C A L E N D A R O F P U B L I C E V E N T S
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
May 8, 1986 Stanford Vol. 1, No. 15
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR THIS THURSDAY, May 8, 1986
12 noon TINLunch
Ventura Hall Definiteness and Referentiality
Conference Room Vol. 1, Ch. 11 of ``Syntax: A
Functional-Typological Introduction''
by Talmy Givon
Discussion led by Mark Johnson (Johnson@csli)
2:15 p.m. CSLI Seminar
Ventura Hall On Visual Representation (Part 2 of 3)
Trailer Classroom David Levy, Xerox PARC (Dlevy.pa@xerox)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Redwood Hall Whither CSLI?
Room G-19 John Perry, Director, CSLI
--------------
CSLI ACTIVITIES FOR NEXT THURSDAY, May 15, 1986
12 noon TINLunch
Ventura Hall A Critique of Pure Reason
Conference Room by Drew McDermott
Discussion led by Pat Hayes (PHayes@sri-kl)
(Abstract next week)
2:15 p.m. CSLI Seminar
Ventura Hall Beyond the Chalkboard: Computer Support for
Trailer Classroom Collaboration and Problem Solving in Meetings
(Part 3 of 3)
Mark Stefik, Intelligent Systems Lab., Xerox PARC
(Abstract on page 2)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Redwood Hall Transfer of f-structures Across Natural Languages
Room G-19 Tom Reutter, Weidner Communications Corp., Chicago
(Abstract on page 2)
--------------
!
Page 2 CSLI Calendar May 8, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
NEXT WEEK'S SEMINAR
Beyond the Chalkboard:
Computer Support for Collaboration and Problem Solving in Meetings
Mark Stefik
Intelligent Systems Laboratory, Xerox Palo Alto Research Center
Computers for individuals are widely used. During meetings, however,
we leave them behind and rely on passive media such as chalkboards.
An experimental meeting room called the Colab has been created at
Xerox PARC. It is for studying computer support of collaborative
problem-solving in face-to-face meetings. The long-term goal is to
understand how to build computer tools to make meetings more
effective. This talk is about several dimensions of the Colab
project, including the physical setting, the special hardware and
software that have been created, the principles and technical results
that have emerged in the work so far, and some preliminary
observations about the first Colab meetings.
--------------
NEXT WEEK'S COLLOQUIUM
Transfer of f-structures Across Natural Languages
Tom Reutter, Weidner Communications Corp., Chicago
A recursive algorithm for mapping functional structure from a
source natural language into a target natural language is presented
and its implementation in the programming language CPROLOG is
discussed. The transfer algorithm is guided by a symmetrical
bilingual lexicon. It was prototypically implemented for
German-English as part of a transfer-oriented machine translation
system at the University of Stuttgart (Germany). Special emphasis is
placed on asymmetrical transfer, e.g., mapping of f-structures with
different semantic valencies, unequal NUM and SPEC attributes, etc.
------------
LOGIC SEMINAR
Relationships Between Frege Structures and
Constructive Theories of Functions and Classes
Solomon Feferman
4:15, Tuesday, May 13, Math. Dept. 383-N
-------
∂08-May-86 1413 EMMA@SU-CSLI.ARPA Late Announcement
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 8 May 86 14:13:14 PDT
Date: Thu 8 May 86 13:30:07-PDT
From: julius
Subject: Late Announcement
Sender: EMMA@SU-CSLI.ARPA
To: friends@SU-CSLI.ARPA
Reply-To: julius@csli
Tel: (415) 723-3561
PHILOSOPHY TALK
Truth, Paradox, and Partially Defined Predicates
Scott Soames, Princeton University
Tuesday, May 13, 10:45-12:00, Ventura Seminar Room
Followed by a discussion in the Philosophy Lounge
-------
∂09-May-86 0907 EMMA@SU-CSLI.ARPA Psychology Seminar
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 9 May 86 09:07:36 PDT
Date: Fri 9 May 86 08:16:05-PDT
From: dirk@su-psych
Subject: Psychology Seminar
Sender: EMMA@SU-CSLI.ARPA
To: friends@SU-CSLI.ARPA
Reply-To: dirk@su-psych
Tel: (415) 723-3561
Return-Path: <dirk@su-psych.arpa>
Received: from su-psych.arpa by SU-CSLI.ARPA with TCP; Thu 8 May 86 15:56:49-PDT
Received: by su-psych.arpa with Sendmail; Thu, 8 May 86 16:01:20 pdt
Date: Thu, 8 May 86 16:01:20 pdt
From: dirk@SU-PSYCH (Dirk Ruiz)
Subject: This Week's Psychology Dept. Friday Seminar.
To: friends@csli
Our speaker this week is Martin Braine. Time and place are 3:15, Friday
May 9 in room 100, Jordan Hall. Title and abstract follow.
---------------------------------------------------------------------------
A lexical entry for "if";
Some data on reasoning to a conditional conclusion in children and adults
Martin Braine
A psychological theory of a logical particle should have three parts:
(1) a lexical entry, which specifies the information about the meaning of
the particle carried in semantic memory; (2) a theory of the pragmatic
comprehension processes that, taken with the lexical entry, lead to
construal in context; and (3) a reasoning program that models subjects'
typical modes of reasoning on stimulus materials used in experiments. A
theory of "if" of this sort will be presented, and used to account for some
intuitions and developmental data on inferences, truth judgments, and
comprehension errors. In addition, some experiments will be reported in
which children and adults reason to an "if"-statement as conclusion.
---------------------------------------------------------------------------
-------
∂13-May-86 0937 EMMA@SU-CSLI.ARPA Van Nguyen talk
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 13 May 86 09:37:24 PDT
Date: Tue 13 May 86 08:59:04-PDT
From: Olender@sri-ai
Subject: Van Nguyen talk
Sender: EMMA@SU-CSLI.ARPA
To: friends@SU-CSLI.ARPA
Reply-To: olender@sri-ai
Tel: (415) 723-3561
Return-Path: <OLENDER@SRI-AI.ARPA>
Received: from SRI-AI.ARPA (SRI-STRIPE.ARPA.#Internet) by SU-CSLI.ARPA with TCP; Mon 12 May 86 20:37:47-PDT
Date: Mon 12 May 86 20:35:16-PDT
From: Margaret Olender <OLENDER@SRI-AI.ARPA>
Subject: TALK BY VAN NGUYEN
To: aic-associates@SRI-AI.ARPA, friends@SU-CSLI.ARPA
cc: nguyes@IBM.COM
DATE: May 14, 1986
TIME: 4:15pm
TITLE: "Knowledge, Communication, and Time"
SPEAKER: Van Nguyen
LOCATION: SRI International
Ravenswood Avenue
Building E
CONFERENCE ROOM: EJ228
COFFEE: Waldinger's Office
EK292
3:45pm
-----------------------------------------------------------------------------
KNOWLEDGE, COMMUNICATION, AND TIME
Van Nguyen
IBM Thomas J. Watson Research Center
(Joint work with Kenneth J. Perry)
Abstract
The role that knowledge plays in distributed systems has come under
much study recently. In this talk, we re-examine the commonly
accepted definition of knowledge and examine how appropriate it is for
distributed computing. Motivated by the drawbacks thus exposed, we
propose an alternative definition that we believe to be better suited
to the task. This definition handles multiple knowers and makes
explicit the connection between knowledge, communication, and time.
It also emphasizes the fact that knowledge is a function of one's
initial knowledge, communication history and deductive abilities. The
need for assuming perfect reasoning is mitigated.
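Though the abstract keeps the definition informal, the dependence
just described might be schematized as (our notation, not Nguyen and
Perry's):

   \[
   K_i(t) \;=\; \mathrm{Cl}_{D_i}\bigl(K_i(0) \,\cup\, \mathrm{Hist}_i(t)\bigr)
   \]

where K_i(t) is what process i knows at time t, K_i(0) its initial
knowledge, Hist_i(t) its communication history up to t, and Cl_{D_i}
closure under its (possibly limited) deductive abilities.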
Having formalized these links, we then present the first proof
system for programs that incorporates both knowledge and time. The
proof system is compositional, sound and relatively complete, and is
an extension of the Nguyen-Demers-Gries-Owicki temporal proof system
for processes. Surprisingly, it does not require proofs of
non-interference (as first defined by Owicki-Gries).
-------
-------
∂14-May-86 1710 EMMA@SU-CSLI.ARPA Calendar, May 15, No. 16
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 14 May 86 17:10:40 PDT
Date: Wed 14 May 86 16:46:19-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Calendar, May 15, No. 16
To: friends@SU-CSLI.ARPA
Tel: (415) 723-3561
!
C S L I C A L E N D A R O F P U B L I C E V E N T S
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
May 15, 1986 Stanford Vol. 1, No. 16
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR THIS THURSDAY, May 15, 1986
12 noon TINLunch
Ventura Hall A Critique of Pure Reason
Conference Room by Drew McDermott
Discussion led by Pat Hayes (PHayes@sri-kl)
(Abstract on page 2)
2:15 p.m. CSLI Seminar
Ventura Hall Beyond the Chalkboard: Computer Support for
Trailer Classroom Collaboration and Problem Solving in Meetings
(Part 3 of 3)
Mark Stefik, Intelligent Systems Lab., Xerox PARC
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Redwood Hall Transfer of f-structures Across Natural Languages
Room G-19 Tom Reutter, Weidner Communications Corp., Chicago
--------------
CSLI ACTIVITIES FOR NEXT THURSDAY, May 22, 1986
12 noon TINLunch
Ventura Hall Stalnaker on the Semantics of Conditionals
Conference Room Ch 7 ``Conditional Propositions,'' Inquiry
by Robert Stalnaker
Discussion led by Chris Swoyer (Swoyer@csli)
(Abstract on page 2)
2:15 p.m. CSLI Seminar
Ventura Hall Events and Modes of Representing Change
Trailer Classroom Carol Cleland (Cleland@csli)
(Abstract in next week's calendar)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Redwood Hall Title to be announced
Room G-19 Nick Negroponte, MIT Media Lab.
--------------
!
Page 2 CSLI Calendar May 15, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
THIS WEEK'S TINLUNCH
A Critique of Pure Reason
by Drew McDermott
Discussion led by Pat Hayes (PHayes@sri-kl)
In this recent manuscript, McDermott documents his disillusion with
the `logicist' view of knowledge representation in AI, i.e., the idea
that the language of thought is something like first-order predicate
calculus, and---more especially---that processes of thought are
something like the drawing of valid conclusions from stored
assumptions. McDermott has been, in spite of his upbringing at MIT,
one of the vocal advocates of this point of view (often identified
with Stanford in AI circles), so this volte-face is especially
interesting. His main thesis is the rejection of the claim that a
clear objective semantics for a representational language requires
that it be regarded as a logic, and the processes on it as inferences. His
examples are largely drawn from the literature on `qualitative' or
`naive' physics.
--------------
NEXT WEEK'S TINLUNCH
Stalnaker on the Semantics of Conditionals
Ch. 7 ``Conditional Propositions,'' Inquiry
by Robert Stalnaker
Discussion led by Chris Swoyer, University of Oklahoma (Swoyer@csli)
In this chapter Stalnaker presents his latest thoughts on a semantics
for conditionals (both subjunctive and indicative) and defends his
account against criticisms by David Lewis and others, focusing on such
topics as conditional excluded middle, `would' vs. `might'
conditionals, and Lewis' limit assumption.
------------
LOGIC SEMINAR
Maslov's Theory of Gentzen Type Systems
Prof. Vladimir Lifschitz, San Jose State University
4:15, Tuesday, May 20, Math. Dept. 383-N
-------
∂15-May-86 1704 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 3, part 1
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 15 May 86 17:03:17 PDT
Date: Thu 15 May 86 15:57:01-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: CSLI Monthly, No. 3, part 1
To: friends@SU-CSLI.ARPA
Tel: (415) 723-3561
C S L I M O N T H L Y
-------------------------------------------------------------------------
May 1986 Vol. 1, No. 3
-------------------------------------------------------------------------
A monthly publication of the Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
------------------
CONTENTS
Modular Programming Language Semantics
(A Subproject of STASS)
by J. A. Goguen and J. Meseguer part 1
Project Reports part 2
Situation Theory and Situation Semantics (STASS)
by Jon Barwise part 2
Semantics of Computer Languages
by Terry Winograd part 3
Approaches to Computer Languages (LACL)
by Stuart Shieber and Hans Uszkoreit part 4
Lexical Project
by Annie Zaenen part 4
Phonology and Phonetics
by Paul Kiparsky part 5
Finite State Morphology (FSM)
by Lauri Karttunen part 6
Japanese Syntax Workshop part 6
CSLI Postdoctoral Fellows part 6
Jean Mark Gawron
Helene Kirchner
Ed Zalta
CSLI Snapshots: Lucy Suchman part 7
Giants Fold in Ninth; CSLI Presence Blamed
By our Special Correspondent part 7
------------------
MODULAR PROGRAMMING LANGUAGE SEMANTICS
(A Subproject of STASS)
J. A. Goguen and J. Meseguer
Some computations, such as evaluating a numerical function or
sorting a list, are "context independent", in the sense that just
their input determines the final result. By contrast, a query to a
database or to an airline reservation system involves computations
that can be best understood as "context dependent", in the sense that
the final result also depends on background information already
available to the computer system. This background information is
usually referred to as the "state" of the system, and it usually
appears as an implicit parameter in the computation. This distinction
provides a rough division of programming languages into two classes:
1) "Declarative" languages, which provide mainly context independent
computation
2) "Imperative" languages, where states are implicit and computation is
generally context dependent
[NOTE: Some recent work on unifying functional and object-oriented
programming seems to transcend this distinction, perhaps suggesting
that a somewhat different point of view should be taken.]
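The distinction is easy to see in miniature. In the sketch below (our
own illustration, with invented predicates), the first relation is
context independent, while the second must be handed the state of a
toy reservation system as an explicit argument -- exactly the
parameter that imperative languages leave implicit:

   % Context independent: the input alone fixes the result.
   sorted_copy(List, Sorted) :-
       msort(List, Sorted).          % msort/2: sorting builtin
                                     % in most Prolog systems

   % Context dependent: the answer also depends on a state, here
   % a list of booked(Flight, Seat) records passed in explicitly.
   seats_free(Flight, State, Free) :-
       capacity(Flight, Cap),        % capacity/2: an assumed fact
       findall(S, member(booked(Flight, S), State), Taken),
       length(Taken, N),
       Free is Cap - N.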
Giving formal semantics for declarative languages is generally simpler
than for imperative languages. This is because tools from traditional
mathematical logic apply directly to declarative languages. In fact,
most declarative languages have been designed with a particular
mathematical theory in mind. This includes "functional" languages
such as pure LISP (consisting essentially of recursive function
definitions) and OBJ (consisting of function definitions in
many-sorted equational logic), as well as "relational logic
programming languages" such as pure PROLOG (consisting of Horn clause
definitions of relations). More generally, we suggest the class of
"logical programming languages", whose programs consist of sentences
in some logical system, and whose computations are (reasonably
efficient forms of) deductions in that system; this class includes
both functional and relational languages, as well as their
unification, as in the language EQLOG that we have developed at CSLI.
[NOTE: The notion of a logical programming language can be made more
formal by using the notion of "institution" of Goguen and Burstall.]
By contrast, the semantics of imperative languages is necessarily
more complex, and has required the development of new tools. The
greatest achievements have been made using Scott-Strachey
"denotational semantics". In denotational semantics, the parameters
implicit in a computation are made explicit, and the denotation of a
program text is a higher order function belonging to a complex domain
of denotations. In a certain sense, this is similar to Montague's
approach to natural language semantics. In spite of denotational
semantics' great contributions, two important problems remain
unsolved:
1) Modularity of programming language features
2) Comprehensibility of semantic definitions
The first problem has to do with the meanings given to programming
language features, such as assignment, DO-WHILE, and procedure call;
we would like to give "once-and-for-all" definitions of such features
that can be used in the semantics of any language having that feature.
This contrasts with standard denotational definitions, which may
require providing extra levels of higher order functions (called
"continuations") when interactions with other features occur (for
example, adding GO-TO's or implicit backtracking to a language
previously lacking them will be likely to cause problems). Montague
semantics exhibits a similar lack of modularity, reflected in the need
to raise the level of the higher order functions that are the
denotations of the different language constituents when new ones are
added to the grammar; for instance, a function from individuals to
truth values might suffice as the denotation of the verb "run" in the
phrase "John Brown runs", but a function of a higher order would be
needed for "All men run".
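In conventional type-theoretic notation, the shift just described can
be displayed as follows (a standard reconstruction, not the authors'
own formulation):

   \[
   \begin{aligned}
   \text{``John Brown runs'':}\quad
     & \mathit{run} : e \to t,
     && \mathit{run}(\mathit{jb}) \\
   \text{``All men run'':}\quad
     & [\![\text{all men}]\!] : (e \to t) \to t,
     && [\![\text{all men}]\!](\mathit{run})
   \end{aligned}
   \]

Once quantified subjects are admitted, uniformity forces every
subject -- including "John Brown" -- up to the higher type, which is
just the failure of modularity at issue.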
The second problem has to do with the fact that denotational
definitions may involve hundreds of pages and be quite hard to read
even for specialists. Of course, lack of feature modularity is part
of the problem, since this makes semantic definitions nonreusable, so
that each feature has to be reconstructed in the context of each
different language. Another serious and closely related difficulty
comes from the fact that all the implicit parameters have to be made
explicit, so that definitions can become quite cluttered. Thus, for
realistic programming languages, it is very difficult to use
denotational techniques in formulating machine-independent standards
or in generating compilers directly from language definitions.
We hope to overcome these problems by using situation theory with an
explicit distinction between foreground and background. This
distinction, which is not made by denotational approaches, seems to be
crucial for overcoming the two problems mentioned above. The
advantage of situation theory is that it permits us to deal with
information in a context, so that information is made explicit only
relative to a background. This seems ideally suited to the semantics
of imperative languages, and it is even useful for specifying the
operational semantics of declarative languages, where one is
interested in how things are actually computed, and in problems
concerning the control of such computations. Another reason for being
interested in using situation theory in this way is that it permits a
relatively direct comparison between the semantics of natural
languages and programming languages; indeed, there seem to be some
important structural similarities, as well as some interesting
differences.
In this approach, which we are developing as part of the STASS Project
(see the STASS report in this issue), we conceive the semantics of
programming language features as "actions" that transform one
computational situation into another. A computational situation is
understood as having different components, such as control,
environment, store, and input-output. By splitting these components
apart, yet treating them within a single formalism, we attempt to
regain feature modularity. For example, a GO-TO can be seen as
affecting only the control situation, without affecting the rest of
the computational situation. In such an account, the addition of new
features need not change the semantic definitions of previous
features, although it may well introduce new and more complex
structures into the embedding situation. We have already studied
several common features of imperative programming languages, as well
as some control issues for logic programming, from this point of view.
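As a toy rendering of the idea (our own sketch in Prolog, with
invented predicate names, not the project's formalism), one can pack
the components of a computational situation into a single term and
write each feature's rule so that it mentions only the components it
affects:

   % sit(Control, Env, Store) groups the components of a
   % computational situation.

   % Assignment touches only the store; control and
   % environment pass through unchanged.
   step(assign(X, V), sit(C, E, S), sit(C, E, [X = V|S])).

   % GO-TO touches only the control component.
   step(goto(Label), sit(_, E, S), sit(at(Label), E, S)).

Adding a new feature then means adding new step/3 clauses (and
perhaps new components), without revising the clauses already
written.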
Another part of this study is to develop a graphical notation both for
computational situations and for the situation-theoretic axioms
involved in defining programming language features. As well as being
very intuitive (once it has become familiar), such a notation has the
advantage of being independent from any present or future variations
in situation theory, thus permitting our descriptive work to proceed
in parallel with the development of adequate theoretical foundations.
The graphical notation could also be used in the user interface of a
programming language design system based on situation theory. One can
envision such a system generating compilers from knowing what features
are to be provided, and what their syntax is.
Along somewhat more general lines, since we view the meaning of a
programming language feature as an action that transforms one
situation into another, this work has been providing the STASS group
with some examples and stimulus for a systematic theory of action in
situation theory.
---------------------
end of part 1 of 7
-------
∂15-May-86 1751 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 3, part 2
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 15 May 86 17:51:35 PDT
Date: Thu 15 May 86 16:06:26-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: CSLI Monthly, No. 3, part 2
To: friends@SU-CSLI.ARPA
Tel: (415) 723-3561
PROJECT REPORTS
SITUATION THEORY AND SITUATION SEMANTICS (STASS)
Jon Barwise
Project Participants: Curtis Abbott, Jon Barwise (Project Leader),
Brian Smith, John Etchemendy, Mark Gawron,
Joseph Goguen, Kris Halvorsen, David Israel,
John Perry, Jose Meseguer, Ken Olson,
Stanley Peters, Carl Pollard, Mats Rooth,
Susan Stucky, Dag Westerstahl
The STASS Project represents a coordinated effort both to develop a
unified theory of meaning and information content, and to apply that
theory to specific problems that have arisen within the various
disciplines represented within the group: AI, Computer Science,
Linguistics, Logic, and Philosophy. The guiding idea behind the
formation of this group was to use provisional versions of situation
theory to give detailed analyses of the semantics of natural and
computer languages. This serves both to hone intuitions about the
information theoretic structures required by such analyses, and to
provide criteria of adequacy for our theory of such structures. The
goal is to make these intuitions and criteria become precise enough to
provide the basis of a mathematically rigorous, axiomatic theory of
information content that could be used in all these various
disciplines.
The group has five subprojects: semantics of natural and computer
languages, situated inference, representation theory, and axiomatizing
situation theory. In the limited space available here, I will report
on only one small aspect of our work over the past few months, one
that cuts across the first two subprojects. Other aspects will be
discussed in future newsletters, as well as in CSLI Reports from the
group.
Researchers at CSLI recognize the many cross-cutting ways in which
implicit aspects of information-bearing structures interact to affect
meaning, content, and information flow. An adequate theory must
provide accounts of all these implicit aspects, and explain how they
interact. It is natural to provide at least partially independent
accounts (modules) for each of the related aspects. For example, in
the case of utterances, the syntax module would provide an account of
a system of interrelated grammatical features. In developing such
modules, it is important to remember that the full theory must relate
all the modules to each other and to the properties of the utterance.
Thus it is a crucial mistake to move from modularity to autonomy,
emphasizing economy and elegance at the level of the individual
module, disregarding the module's role in the theory as a whole. For
example, to restrict one's semantic devices to (unary) function
application (as in much of semantics) has led to unnecessary
complications in the description of the semantics of nonapplicative
programming languages, and in the semantics of, e.g., "wh-constructs"
in natural language.
An information theoretic perspective suggests a unified way of looking
at these modules and their interaction. Consider the following
well-known example.
(1) I saw a man out walking.
What are the facts? For one, we see that we can use this sentence to
describe two quite different kinds of situations, one in which the
reporter was out walking, the other in which the man seen was the
walker. Alongside this semantic ambiguity, there is arguably a
syntactic or structural ambiguity in the sentence: does the
prepositional phrase modify the verb "see" or the noun "man"?
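In one conventional bracketing (ours, for illustration):

   (1a)  [saw [a man] [out walking]]   -- the phrase modifies the
                                          verb; the reporter walked
   (1b)  [saw [a man [out walking]]]   -- the phrase modifies the
                                          noun; the man walked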
What matters, for the theorist, are various correlations. First,
note that which syntactic structure is appropriate is correlated with
which type of situation is described. Second, which pair is
appropriate, in a given case, would seem to depend on some fact about
the speaker -- i.e., about something else again, not grammatical
structure, not described situation, but the context in which the
utterance takes place. Thus we see that a feature of the context is
correlated with facts about both the described situation and the
grammatical situation.
Things can be more complicated yet. Notice, for example, that if
we read or hear (1), we will usually be able to tell which reading is
appropriate, not because the speaker tells us, but because of other
features of the background. Thus, for example, if the previous
discourse has already established that the reporter was looking out
his window, the second reading will be appropriate. Thus, in
appropriate circumstances, discourse features of the background of the
utterance are themselves correlated with grammatical structure, with
the described situation, and with facts about the speaker.
It is easy to ask the wrong question at this point: which of these
four comes first, which second, and so on? Which is most basic? On
the present perspective, one need not answer such questions. Rather,
one needs to do three things:
(i) describe each of the implicit aspects;
(ii) state the relationships among all four aspects; and
(iii) show how information about any one can give information about
others.
The last two requirements strongly suggest using a single descriptive
framework for all the theoretical modules needed in (i). This is of
course contrary to most current practice, in which very different
systems are used to describe different information-bearing aspects of
utterances (trees for syntax, model theoretic structures for
semantics, etc.). However, since the aspects treated by each module
are, in fact, structures containing information about the others, it
seems that a theory of information content should provide a uniform
method for describing these modules, and their interaction. We take
this as one criterion of adequacy of an information theoretic account
of language.
In providing a framework for developing grammatical modules under
this conception, situation theory has proven to be a useful tool.
First, as Stucky has argued, it is important to consider linguistic
representations (both theories and their notations) under
interpretation, i.e., viewing the representations themselves as
deserving of semantic interpretation in the world. This view not only
provides a clearer articulation of the relation between form, meaning,
and interpretation, and shows the way to a notion of representational
equivalence whereby different theories can be compared, it also
provides a mechanism for writing constraint-based linguistic
descriptions without losing the insights provided by older sequential
models. (Of course, it is to be expected that some facts may turn out
to be nonfacts under the new view, and that new facts will emerge.)
Stucky first shows how to interpret a linguistic formalism in
situation theoretic terms, in such a way as to free the linguistic
description from the sequential model. Then, using the tools for
stating constraints that situation theory provides, she develops a
fragment in which constraints hold among various subsystems of
grammar. The additional flexibility, she argues, will allow in
principle for previously intractable phenomena (such as the
constraints holding between the form of language and the discourse
domain) to be seriously investigated.
This perspective also helps to untangle a number of previously
difficult semantical issues. Consider problems of quantifier scope.
Several members of the group have been investigating how general noun
phrases (also called quantifiers) like "every philosopher", "no
chair", and "most linguists" are interpreted. A long-standing problem
connected with semantically interpreting such quantifiers in human
languages is that they are often ambiguous as to their "scope". For
instance, the sentence
(2) It's fun to play most of those games
can be used to make either the statement that playing a majority of
the games in question is enjoyable (possibly because of the
relationships between them, not because any one is fun to play), or
the statement that each one of more than half of the games is
enjoyable. The first statement corresponds to the "narrow scope"
interpretation of the quantifier "most of those games", and the second
to a "wide scope" interpretation. Similarly, there are two possible
interpretations of the sentence
(3) No one has read most of those books
according to whether "most of those books" has narrower or wider scope
than the quantifier "no one".
Previous approaches to the problem fall into two classes. The
largest class comprises those analyses that postulate distinct
syntactic structures or "logical forms" for different scopings of
quantifiers; each such structure is interpreted unambiguously, with
quantifiers taking particular, specified scopes. The other, smaller
class of analyses assigns multiple semantic values to each phrase of
the sentence, some of these semantic values consisting of two parts: a
parametrized meaning for the phrase (one which depends on the value of
a "free" parameter), plus a collection of operators including one
which will bind that parameter.
On the view sketched above, understanding language involves seeing
how information flows back and forth between modules in the analysis
of particular utterances -- in recognizing the structure of the
sentence and facts about the speaker and background, and in relating
these to information about the situation the utterance describes. The
problem of
obtaining multiple interpretations of utterances of sentences like (2)
and (3) appears in a different light, given this view. We believe
that such sentences have only one syntactic structure. The way in
which the single structure becomes associated with either of two
interpretations involves different ways in which information about the
context of utterance can flow through that structure. That is, which
interpretation is appropriate is controlled by facts about the
speaker, including intentions, in the utterance situation. Assigning
the narrow scope interpretation to "most of those games" in (2) or
"most of those books" in (3) presents no particular problems on any
theory; information simply flows in one direction through the tree
structure, or its situation theoretic equivalent. However,
determining whether those quantifiers have scope over the predicate
"fun" or the quantifier "no one" requires, in effect, that information
about that predicate or quantifier flow "down the tree" to the
embedded verb phrase "play most of those games" or "read most of those
books" so that it is available for combining with information about
the interpretation of the verbs "play" or "read" before the result is
combined with the interpretation of the quantifier "most...". Once
the latter combination is performed, the result flows directly "up" to
the sentence as a whole. Thus the process of assigning a wide scope
interpretation to the quantifier "most..." does not provide an
interpretation for the embedded verb phrase. This consequence of our
analysis yields the striking prediction that a sentence such as
(4) Few people have read most of those books, but Bill has
can mean that Bill is among the few people who have read a majority of
the books in question, but cannot mean that most of the books have the
property that Bill and only a few other people have read them. Thus we
would seek to explain this fact and various related ones, which were
discovered by Ivan Sag and studied extensively by him in a very
different semantic framework.
A rather different sort of application comes from issues in the
semantics of computer languages. Goguen and Meseguer have found the
ability to deal in a separate but equal way with foreground and
background to be crucial in their effort to achieve "feature
modularity" for programming language semantics. Feature modularity
allows the semantics of a feature, such as "assignment", to be given
once and for all; it does not need to be changed when a new feature,
such as "go-to", is later added to a language. By contrast, in
standard approaches, such as denotational semantics, the semantics
previously given to a feature may require drastic alteration when new
features are added.
From the present perspective, the problem with achieving modularity
in standard approaches stemmed from the attempt to have a single
module that treats information about control, environment, store,
and output, defined over the syntactic modules where programs are
defined. Splitting these apart, but treating them with a single
descriptive system, promises to help us regain feature modularity.
Thus, for example, a "go-to" can be seen as affecting the control
situation, which is like the background situation in the natural
language case, and need have no effect whatsoever on any of the rest.
With such an account of a language, the addition of new features need
not change the semantic definitions of previous features, although it
may well introduce new and more complex structures in the embedding
situation. Several programming language features have already been
studied from this point of view, and a graphical language for feature
specification that is naturally associated with their situation
theoretic structures is being developed. This work has uncovered
interesting connections with a theory of action in situation
semantics, where actions are understood as transformations of
situations.
The formal machinery for expressing the relationships between these
different situations goes beyond the scope of this note. Suffice it
to say, for those familiar with the theory, that they all involve
constraints expressed in terms of relations between parametric states
of affairs. This perspective has been quite useful in thinking about
these problems, but the problems have also led to refinements and
enrichments of situation theory. So the group feels that the initial
motivation for the STASS project was a very sound one.
end of part 2 of 7
-------
∂15-May-86 1900 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 3, part 3
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 15 May 86 19:00:28 PDT
Date: Thu 15 May 86 16:07:32-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: CSLI Monthly, No. 3, part 3
To: friends@SU-CSLI.ARPA
Tel: (415) 723-3561
SEMANTICS OF COMPUTER LANGUAGES
Terry Winograd
Project Participants: Mary Holstege, Jens Kaasboll, Birgit Landgrebe,
Kim Halskov Madsen, Kurt Normark, Liam Peyton,
Terry Winograd (Project Leader)
The goal of this project is to develop the theory of semantics of
computational languages, through the design of a specific family of
languages for system description and development. Our strategy can be
described as an interweaving of three levels: theory, experiments, and
environments.
A. Theory
Our goal is to develop theories that will serve as the basis for a
variety of constructed languages for describing, analyzing, and
designing real world situations and systems. A number of issues come
up in this context that have not been adequately dealt with, either in
work on natural language semantics or on the semantics of programming
languages. The following two examples illustrate the kind of analysis
we will develop through the experimental work described below.
1) Abstraction and approximation: In describing any complex real world
situation, people mix descriptions at different levels of abstraction
and detail. There are "higher-level" descriptions which in some way
or another simplify the account that is needed at a "lower" level.
There are a number of dimensions along which these simplifications are
made:
Generalization: Using more general categories (e.g., "tool") to
describe objects (or events or activities) that could be given more
precise classifications ("hammer", "saw", etc.)
Composition: Describing collections of objects (or activities,
etc.) in terms of composites without specifying their decomposition
into components. For example, describing the activity of a word
processor using elements like "update the screen" without giving
further detail of the components of the activity.
Idealization: Describing some kind of "standard" or "normal" case,
leaving out the details (or even whole dimensions) needed for some
actually occurring situations. Commonly, program descriptions
(even formal specifications) deal only with the cases in which
everything goes "normally" (e.g., no arithmetic overflows, user
interrupts, equipment malfunctions, etc.).
Analogy: Describing one situation by giving a similar one (in some
dimensions) and specifying (explicitly or implicitly) a mapping from
one to the other.
This list could be extended further, but these examples suffice to
make the basic point. In working with practical languages (such as
programming and specification languages) there is a "semantic
discontinuity" in moving from one abstraction or approximation to
another. In the simple cases (e.g., straightforward composition)
there can be a clear mapping, but we do not yet have the theories to
deal adequately with more general cases.
We do not claim to be able to produce full answers to problems such
as analogy, but there is much that can be done in developing a
semantic theory adequate for dealing with these phenomena. In
particular, we want to develop a theory of the mapping between
different semantic domains and its relation to the "breakdowns" that
arise with respect to a given characterization (or "account").
The major accomplishment along theoretical lines during 1985-86 was
the completion and publication of a book coauthored by Winograd (with
Fernando Flores of Action Technologies and Logonet), entitled
Understanding Computers and Cognition. It lays a
theoretical/philosophical background for working on the more specific
questions of semantics, and served as a basis for discussions held in
conjunction with the Representation and Reasoning project. These
included the development of some concrete examples of representation
and idealization in computer systems (in particular, a
courses-and-grades system for a university registrar, and a simple
elevator controller). These and subsequent discussions led to the
writing of a paper on the foundations of representation, from the
perspective of the Winograd and Flores book (Winograd, forthcoming b),
which will be issued as a CSLI document later this summer, with hopes
of being refined (as a result of the ensuing discussion) for journal
publication. This paper and the material in the book will be the
basis for a CSLI seminar in May on "Why language isn't information".
2) Physical embodiment: Much of the work on computing languages has
dealt with the computer in a mathematical domain of inputs and
outputs, ignoring its embodiment as a physical process. This has been
a useful abstraction in many ways but, as the current interest in
issues of concurrency demonstrates, it is not adequate for many of the
phenomena of real computing (or of computational models for more
general physically embodied systems). In particular, there are
temporal, spatial, and causal constraints that can be described among
the components (in space and in time) of physical systems. To some
extent, these constraints can be reflected in the structure of
languages that are used to describe such systems. Research on system
modelling with Petri nets and related formalisms has attempted to make
the constraints explicit and precise. Work done in our group during
this year (including the completion of a dissertation by Kai-zhi Yue
[Yue, 1985]) has dealt with the use of such constraints in analyzing
the coherence of specifications of "stable operational systems" (see
Yue and Winograd, 1985).
B. Experiments
In exploring the general properties of situated formal languages, we
are focusing on the design and use of a class of languages called
"system description languages". These share some properties with
programming languages (especially in their overall structure and use
of language constructs) but have a semantics more in the tradition of
model theory and work on natural languages. That is, their meaning is
defined in terms of a correspondence with objects and activities in
the world, rather than through the operations and states of some
machine (or a mathematical abstraction of such a machine).
We have designed a first version of a language called ALEPH
(Winograd, forthcoming c), which has a semantics based on first-order
logic and a sequential interleaving model of discrete events. Work
this year has concentrated on writing up results. Current work
includes incorporating some insights that have developed through our
earlier fragmentary experiments (sketches of descriptions of real
systems), from Yue's dissertation work (which used a similar language
called DAO), and through our interactions with other researchers
looking at problems of system description.
In particular, during the winter quarter we organized a weekly
seminar on "System description and development", in which we looked at
concrete examples of computer systems and the way they are affected
by, and in turn shape, the language and representations used in a
concrete work setting such as a hospital ward. Speakers included Jens
Kaasboll and Kristen Nygaard, from Oslo University (Norway), and Kim
Halskov Madsen, from Aarhus University (Denmark), all of whom have
actively worked on projects relating theoretical computational models
and specification languages to specific system development settings.
Kaasboll and Madsen each visited for several months and are developing
papers relating their earlier work to the perspective on language they
encountered at CSLI (Kaasboll, forthcoming; Madsen, forthcoming). One
of the major results of the seminar will be a paper on "A language
perspective on the design of cooperative work", being prepared for a
conference this December (Winograd, forthcoming d).
C. Environments
As a basis for experimenting with system description languages, we
have been developing an environment, called MUIR, which is a toolkit
for designing and working with formal languages (Winograd, forthcoming
a; Winograd, 1986; Normark, 1986). It allows the experimenter to
specify a language in an extended formalism that includes the
information in ordinary constituent structure rules (expressed in a
hierarchical structure that allows sharing of information about
related forms). The formalism also allows for "gateways" (terminal
symbols that specify another grammar and a nonterminal within it for
further expansion), declared and dynamic "properties" (which can be
associated with a node but are not part of its basic structure), and
multiple "presentation rules" (which map the structure onto some
visible presentation, such as a structured piece of text or a set of
graph nodes and links). The environment provides structure-driven text
editors and graph editors, which use the information in the grammar to
present "texts" (in an extended sense) written in the language and to
provide a variety of operations determined by the language
specification. It is based on a representation of structure in
Abstract Syntax Trees (ASTs) and specification of languages using a
uniform Meta-Grammar.
In addition to providing for language-specific (grammar-driven)
editing and presentation, MUIR provides an overall structure in which
to integrate a variety of language manipulation tools, such as
translators (or "transformers" in general), consistency and coherence
checkers, interpreters, deductive analyzers, etc. The ASTs provide a
uniform format for linguistic structures, and the editors are the
basis for interfaces to all aspects of the system. We have designed
MUIR with our own language designs in mind, but have tried to maintain
a good deal of generality. In fact, we plan to use it to implement
"grammars" of things such as the structure of text files in a language
manual, and conversations and messages in a message system. Although
we do not see the development of this environment as a primary goal,
we believe that it will be general enough and well enough worked out
to be of use to other CSLI researchers (it is implemented in
Interlisp-D).
Our development of MUIR was aided by the discussions in a weekly
seminar on environments that we held in the fall quarter. It was
attended by a number of people from Stanford and local industry, and
discussed the theoretical issues that must be addressed in
computer-based environments of all kinds, including programming
environments, specification environments, design environments (e.g.,
for VLSI design) and text-preparation environments. One key part of
the work was a collaboration with David Levy, relating our concerns to
the theories he is developing in the Analysis of Graphical
Representation group. The work of the long-term visitors (Ole
Lehrmann Madsen during the first year, Kurt Normark since then) has
been extremely useful in formulating the direction and doing
preliminary implementations of the environment.
Over the coming months we see the emphasis of our work as shifting
back from the environment to the design of ALEPH, and to experiments
with it, along with continuing development of the theoretical basis.
References
Publications
Normark, K. 1986. Transformations and Edit Operations in MUIR,
submitted to SIGPLAN/SIGSOFT Conference on Practical Programming
Environments.
Winograd, T. 1986. Hierarchical Grammar as the Basis for a Language
Development Environment, submitted to SIGPLAN/SIGSOFT Conference on
Practical Programming Environments.
Winograd, T. and Flores, F. 1986. Understanding Computers and
Cognition. Norwood, N.J.: Ablex.
Yue, K. 1985. Constructing and Analyzing Specifications of Real World
Systems. PhD Dissertation, Department of Computer Science, Stanford
(to be issued as a CS Report in 1986).
Yue, K. and Winograd, T. 1985. Creating Analyzable System
Descriptions. Proceedings of the Hawaii International Conference on
System Sciences, 476-485.
In preparation:
Kaasboll, J. On the Nature of Using Computers.
Madsen, K. H. Breakthrough through Breakdown: Structured Domains,
Metaphors, and Frames.
Normark, K. Papers on Transformation.
Winograd, T. (a). MUIR: A Language Development Environment.
Winograd, T. (b). Representational Accounts.
Winograd, T. (c). ALEPH: A System Specification Language.
Winograd, T. (d). A Language Perspective on the Design of Cooperative
Work, to be submitted to MCC/MIT conference on Computer-supported
Cooperative Work, 1986.
end of part 3 of 7
-------
∂15-May-86 2019 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 3, part 4
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 15 May 86 20:18:54 PDT
Date: Thu 15 May 86 16:09:45-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: CSLI Monthly, No. 3, part 4
To: friends@SU-CSLI.ARPA
Tel: (415) 723-3561
APPROACHES TO COMPUTER LANGUAGES (LACL)
Stuart Shieber and Hans Uszkoreit
Project Participants: Mary Holstege, Stuart Shieber, Hans Uszkoreit
(Project Leader)
The increasing complexity of computer languages (CLs), current
progress in formal linguistics, and the growing importance of
ergonomic factors in CL design are leading to the emergence of a new
field of research between computer science and linguistics.
LACL is a pilot project to investigate the application of methods
and findings from research on natural languages (NLs) to the design
and description of high-level CLs. The linguistically interesting way
to make CLs resemble NLs is not to simply graft English words or
phrases onto the computer language in a superficial way, as is the
common practice in current CL design, but rather to exploit the rich
inventory of encoding strategies that have developed during the
evolution of NLs and to which humans appear especially attuned.
Currently the LACL group is investigating the need for, and
feasibility of, applying linguistic approaches, techniques, and
findings to a set of sample problems. One of these is the use of
partially free word order among the arguments of functions to allow
flexibility in the order of evaluation and to eliminate the need for
the user to memorize arbitrary argument orders. This requires
disambiguation by sort, type, or special marking. In the paragraphs
below this problem serves as an example of the approach to be taken.
All known human languages have some degree of word order freedom.
That means that in every language there are sets of sentences that
have the same content and differ only in the order of their elements.
We call this phenomenon "permutational variation". Although the
permutational variants always share the same truth-conditional
content, they might differ in their conversational meaning, that is,
not all variants might be equally appropriate in the same situational
and textual contexts.
For the application of our research results on permutational
variation to CLs, we selected an area in which permutational variation
has actually already entered existing programming languages: the order
of the arguments of functions (commands, predicates). Functions with
more than one argument in a programming language correspond roughly to
verbs in NLs. In NLs which allow permutation of arguments --
subjects, direct objects, and indirect objects, for instance -- the
arguments can usually be distinguished through some morphological or
lexical markings such as affixes (e.g., case marking) or particles
that accompany the argument (e.g., prepositions or infinitival
markers). Other NLs, however, require that their arguments occur in a
fixed order.
Until recently, the fixed order approach was the only strategy for
designating the arguments of functions in programming languages.
However, certain more recent languages (like ADA, MESA, or ZETALISP)
provide a concept called "keyword" parameters in function calls or
record construction. The function of keywords is not much different
from the function of argument marking in NLs. In fact, some of the
individual implementations of the concept resemble strategies used in
NLs in quite an astounding way. This is especially true for the
mixture of free and fixed argument order. There is no indication that
the designers of these languages were guided by linguistic knowledge
about such strategies; the techniques simply proved useful at a
rather pretheoretic level.
The use of keywords for marking arguments has recently been disputed
by Richard O'Keefe (1985). O'Keefe suggests that type
checking might be a better strategy to distinguish freely ordered
arguments. However, there is no reason to assume that a choice of a
single principle has to be made. In fact, NLs employ a number of
different strategies in parallel that complement each other in many
ways. It is often a mixture of word order, case marking, semantic, and
pragmatic information that designates the proper argument assignment.
Although there is no need to burden CLs with unnecessary complexity,
the optimal mix of strategies for argument designation needs to be
decided in a theoretically sound way, using all available knowledge
about encoding systems that have proven useful and appropriate in the
languages which are natural for humans.
Other sample problems for our research are:
o The exploitation of parallels between NL descriptions based on
complex structured information (such as f-structures or complex
categories) and type inference in CLs that allow partial
(so-called polymorphic) types.
Current linguistic theories make heavy use of notions of
partial information and identity constraints on information
which lead to a reliance on unification as a technique
for solving these systems of linguistic constraints. Earlier
independent developments in the theory of programming languages
led to the use of unification as a technique for solving type
constraints in typed programming languages. A rich analogy can be
developed along these lines between NL parsing and CL type inference,
which has the potential to contribute to both fields (see the
sketch following this list).
o The use of type inheritance systems for imposing a conceptually
transparent structure on the lexicon.
Inheritance of data types as found in object-oriented
programming languages has counterparts in tools for structuring
lexicons in NL systems.
developed for NL processing might serve to help explicate
the corresponding programming constructs and vice versa.
o The introduction of morphology for marking related lexical items as
to type (derivational morphology), thematic structure (relation
changing), or role (case marking).
o The need for less restricted uses of proforms (pronouns, ellipsis)
in CLs than currently exist.
Long-term issues in the emerging new field that go beyond the scope of
the pilot project include:
o Temporal expressions in the communication among parallel processes.
o The use of speech acts in message passing between objects and
processors.
o The utilization of discourse information to support ellipsis.
References
O'Keefe, R. 1985. Alternatives to Keyword Parameters. SIGPLAN Notices,
June.
LEXICAL PROJECT
Annie Zaenen
Project Participants: Mark Gawron, Kris Halvorsen, Lauri
Karttunen, Martin Kay, Paul Kiparsky,
Mats Rooth, Hans Uszkoreit, Tom Wasow,
Meg Withgott, Annie Zaenen (Project Leader)
The ultimate aim of the Lexical project is to build a lexicon that
is compatible with and useful to the other components of natural
language systems at CSLI. To implement it, we will begin with an
existing on-line dictionary and transform it progressively into what
we need.
Our basic hypothesis is that the syntactic and semantic frameworks
elaborated at CSLI are similar enough for it to be worthwhile to
construct a common lexicon encoded in a form that translates easily
into the different formalisms, rather than to have totally different
encodings for each framework. Given that dictionaries are typically
large, and will, even in the best case, contain more idiosyncratic
information than most components of a natural language system, this is
the only realistic way to proceed.
A lexicon contains morphological, phonological, syntactic, and
semantic information. For our first year of activity we decided to
focus on the syntactic and semantic aspects; the phonological and
morphological sides are better understood, and we assumed it would be
easier in those domains to extract the needed information from what
is already given in existing dictionaries.
In the past months we have investigated what kind of information
should be available to allow syntactic generalizations to be captured.
We started with the syntactic side because we wanted to take advantage
of Kiparsky's current work on this topic and of the presence of Mark
Gawron, a postdoctoral fellow at the center, who has already done
substantial work in this area. Traditionally, generative grammar
presupposes information about syntactic category and
"subcategorization". Our investigation has centered on the role of
thematic information about the arguments of verbs, that is, on the
usefulness of notions like "agent", "source", "theme". This
information is necessary if one wants to capture subregularities like
the relation between the uses of "hit" in "He hit the stick against
the fence" and "He hit the fence with a stick". In the following I
will summarize a few leading ideas that have been established and the
direction that the research is taking.
1. The syntactic behavior of the arguments of predicates is
ultimately based on the meaning of the predicates; hence, an
insightful account should be grounded in semantics. However, it is
useful to pursue the investigation both from the semantic and the
syntactic point of view, as the syntax is the best guide we have at
the moment to linguistically significant generalizations.
2. It is useful to establish equivalence classes that abstract away
from some of the meaning distinctions; for example, the first argument
of the verb "kick" (i.e., the kicker) and that of the verb "kiss"
(i.e., the kisser) have more in common than the first argument of
"kiss" and that of the verb "please" (i.e., the one who is pleased).
How these equivalence classes have to be established is an empirical
question. Representationally there are different ways of addressing
the problem; for example, by having features like "+agentive", by
having roles like "agent", or by having higher predicates like "do"
and "change" whose arguments have by definition the characteristics of
an agent, a theme, etc. Uszkoreit and Zaenen take the latter approach
in the model they are developing, but the technology needed to
implement any of these representations seems to be quite similar.
3. The mapping from thematic information onto syntactic categories is
at least partially hierarchical. For example, a subject cannot be
identified with an agent, a theme, or an experiencer until one knows
the complete set of arguments that a verb takes. But given the
thematic information, large classes of verbs behave in the same way;
for example, for some verbs, if there is an agent, it will be the
subject (except in the passive form, for which an independent regular
mapping can be defined). A schematic sketch of such a mapping is
given after point 6 below.
4. It is possible to represent lexical semantic and syntactic
information using the same kind of constraint-based formalism as is
used in other areas of linguistic modelling at CSLI. (See Fenstad,
Halvorsen, Langholm, and van Benthem, 1985, for the most extensive
discussion of the general ideas.)
5. The information about verb meaning, thematic argument classes, and
the mapping onto the lexical syntax can by and large be encoded using
computational tools already developed in connection with the PATR
project at SRI. They are included in Karttunen's D-PATR grammar
development system that is available at CSLI. This system allows the
grammar writer to use default values, which can be overwritten by
later specifications, and lexical rules, which transform feature sets
in even more radical ways; a small sketch of this overwriting appears
after point 6 below. For a full description of the system, see "D-PATR:
A Development System for Unification-based Grammar Formalisms" (to
appear as a CSLI Report). While the PATR system is useful, it needs
to be further developed. Disjunction and negation must be available
in the description of lexical entries, and it should also be possible
to assign set values to attributes.
6. Among the more basic theoretical questions remains that of
monotonicity. With overwriting and lexical rules, the specifications
of lexical entries are order-dependent, and thus the system as a whole
does not have the property of monotonicity that is felt to be
desirable in other areas of grammar. The reasons for, and the
consequences of, this situation have yet to be addressed in the
overall context of
grammar.
Thinking about the lexicon as a part that has to be integrated into a
larger whole has the following advantages:
o The available syntactic theories delimit what needs
to be said in the lexicon. For example, when we are
able to state that a particular argument will be the
first syntactic argument of a certain verb, we feel
confident that our job is done, whether this argument
will then be treated as a "subject" in LFG, the
"last thing on the subcat list" in HPSG, or the "thing
that the verb will agree with" (in the simple case)
in Kiparsky's theory.
o The syntactic theories also push us to make distinctions
that tend to be overlooked in more independent approaches,
for instance the thematic information mentioned above in
(2) and (3).
o The computational tools get a new testing ground, and one
can discuss in a concrete way how the encoding of lexical
information compares to that of other linguistic information.
o An important question is the possibility of finding a
way to define words in terms of unanalyzed notions like
change, cause, and intention that can then feed into, or be
fed by, semantic theories in which these notions are
interpreted. If such a system can be developed,
we will have a lexicon that both on the syntactic and on
the semantic side is compatible with more than one theory.
In the next few months we will tackle that problem by
trying to determine how our view on lexical semantics fits
in with the semantics developed in STASS and AFL.
By trying to be compatible with syntactic and semantic proposals, we
expect to get a better idea about the place of the lexicon in
linguistic description than would be forthcoming from a study in which
the lexicon is seen as independent.
References
Fenstad, J. E., Halvorsen, P.-K., Langholm, T., and van Benthem, J.
1985. Equations, Schemata, and Situations: A Framework for Linguistic
Semantics. Report No. CSLI-85-29.
end of part 4 of 7
-------
∂15-May-86 2024 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 3, part 4
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 15 May 86 20:18:54 PDT
Date: Thu 15 May 86 16:09:45-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: CSLI Monthly, No. 3, part 4
To: friends@SU-CSLI.ARPA
Tel: (415) 723-3561
APPROACHES TO COMPUTER LANGUAGES (LACL)
Stuart Shieber and Hans Uszkoreit
Project Participants: Mary Holstege, Stuart Shieber, Hans Uszkoreit
(Project Leader)
The increasing complexity of computer languages (CLs), current
progress in formal linguistics, and the growing importance of
ergonomic factors in CL design is leading to the emergence of a new
field of research between computer science and linguistics.
LACL is a pilot project to investigate the application of methods
and findings from research on natural languages (NLs) to the design
and description of high-level CLs. The linguistically interesting way
to make CLs resemble NLs is not to simply graft English words or
phrases onto the computer language in a superficial way, as is the
common practice in current CL design, but rather to exploit the rich
inventory of encoding strategies that have developed during the
evolution of NLs and to which humans appear especially attuned.
Currently the LACL group is investigating the need for, and
feasibility of, applying linguistic approaches, techniques, and
findings to a set of sample problems. One of these is the use of
partially free word order among the arguments of functions to allow
flexibility in the order of evaluation and to eliminate the need for
the user to memorize arbitrary argument orders. This requires
disambiguation by sort, type, or special marking. In the paragraphs
below this problem serves as an example of the approach to be taken.
All known human languages have some degree of word order freedom.
That means that in every language there are sets of sentences that
have the same content and differ only in the order of their elements.
We call this phenomenon "permutational variation". Although the
permutational variants always share the same truth-conditional
content, they might differ in their conversational meaning, that is,
not all variants might be equally appropriate in the same situational
and textual contexts.
For the application of our research results on permutational
variation to CLs, we selected an area in which permutational variation
has actually already entered existing programming languages: the order
of the arguments of functions (commands, predicates). Functions with
more than one argument in a programming language correspond roughly to
verbs in NLs. In NLs which allow permutation of arguments --
subjects, direct objects, and indirect objects, for instance -- the
arguments can usually be distinguished through some morphological or
lexical markings such as affixes (e.g., case marking) or particles
that accompany the argument (e.g., prepositions or infinitival
markers). Other NLs, however, require that their arguments occur in a
fixed order.
Until recently, the fixed order approach was the only strategy for
designating the arguments of functions in programming languages.
However, certain more recent languages (like ADA, MESA, or ZETALISP)
provide a concept called "keyword" parameters in function calls or
record construction. The function of keywords is not much different
from the function of argument marking in NLs. In fact, some of the
individual implementations of the concept resemble strategies used in
NLs in quite an astounding way. This is especially true for the
mixture of free and fixed argument order. There is no indication that
the designers of the languages have been guided by linguistic
knowledge about these strategies, it just happened that the techniques
were determined to be useful on a rather pretheoretic level.
The use of keywords for marking arguments has been recently
disputed by Richard O'Keefe (1985). O'Keefe suggests that type
checking might be a better strategy to distinguish freely ordered
arguments. However, there is no reason to assume that a choice of a
single principle has to be made. In fact, NLs employ a number of
different strategies in parallel that complement each other in many
ways. It is often a mixture of word order, case marking, semantic, and
pragmatic information that designates the proper argument assignment.
Although there is no need to burden CLs with unnecessary complexity,
the optimal mix of strategies for argument designation needs to be
decided in a theoretically sound way, using all available knowledge
about encoding systems that have proven useful and appropriate in the
languages which are natural for humans.
Other sample problems for our research are:
o The exploitation of parallels between NL descriptions based on
complex structured information (such as f-structures or complex
categories) and type inference in CLs that allow partial
(so-called polymorphic) types.
Current linguistic theories make heavy use of notions of
partial information and identity constraints on information
which lead to a reliance on unification as a technique
for solving these systems of linguistic constraints. Earlier
independent developments in the theory of programming languages
led to the use of unification as a technique for solving type
constraints in typed programming languages. A rich analogy can be
developed along these lines between NL parsing and CL type inference,
which has the potential to contribute to both fields.
o The use of type inheritance systems for imposing a conceptually
transparent structure on the lexicon.
Inheritance of data types found in object-oriented
programming languages have counterparts as tools for structuring
lexicons in NL systems. The technology of such systems
developed for NL processing might serve to help explicate
the corresponding programming constructs and vice versa.
o The introduction of morphology for marking related lexical items as
to type (derivational morphology), thematic structure (relation
changing), or role (case marking).
o The need for less restricted uses of proforms (pronouns, ellipsis)
in CLs than currently exist.
Long-term issues in the emerging new field that go beyond the scope of
the pilot project include:
o Temporal expressions in the communication among parallel processes.
o The use of speech acts in message passing between objects and
processors.
o The utilization of discourse information to support ellipsis.
References
O'Keefe, R. 1985. Alternatives to Keyword Parameters. SIGPLAN Notices,
June.
LEXICAL PROJECT
Annie Zaenen
Project Participants: Mark Gawron, Kris Halvorsen, Lauri
Karttunen, Martin Kay, Paul Kiparsky,
Mats Rooth, Hans Uszkoreit, Tom Wasow,
Meg Withgott, Annie Zaenen (Project Leader)
The ultimate aim of the Lexical project is to build a lexicon that
is compatible with and useful to the other components of natural
language systems at CSLI. To implement it, we will begin with an
existing on-line dictionary and transform it progressively into what
we need.
Our basic hypothesis is that the syntactic and semantic frameworks
elaborated at CSLI are similar enough for it to be worthwhile to
construct a common lexicon encoded in a form that translates easily
into the different formalisms, rather than to have totally different
encodings for each framework. Given that dictionaries are typically
large, and will, even in the best case, contain more idiosyncratic
information than most components of a natural language system, this is
the only realistic way to proceed.
A lexicon contains morphological, phonological, syntactic, and
semantic information. For our first year of activity we decided to
focus on the syntactic and semantic aspects; the phonological and
morphological sides are better understood, and we assumed it would be
easier in those domains to extract the needed information out of
information already given in existing dictionaries.
In the past months we have investigated what kind of information
should be available to allow syntactic generalizations to be captured.
We started with the syntactic side because we wanted to take advantage
of Kiparsky's current work on this topic and of the presence of Mark
Gawron, a postdoctoral fellow at the center, who has already done
substantial work in this area. Traditionally, generative grammar
presupposes information about syntactic category and
"subcategorization". Our investigation has centered on the role of
thematic information about the arguments of verbs, that is, on the
usefulness of notions like "agent", "source", "theme". This
information is necessary if one wants to capture subregularities like
the relation between the uses of "hit" in "He hit the stick against
the fence" and "He hit the fence with a stick". In the following I
will summarize a few leading ideas that have been established and the
direction that the research is taking.
1. The syntactic behavior of the arguments of predicates is
ultimately based on the meaning of the predicates; hence, an
insightful account should be grounded in semantics. However, it is
useful to pursue the investigation both from the semantic and the
syntactic point of view, as the syntax is the best guide we have at
the moment to linguistically significant generalizations.
2. It is useful to establish equivalence classes that abstract away
from some of the meaning distinctions; for example, the first argument
of the verb "kick" (i.e., the kicker) and that of the verb "kiss"
(i.e., the kisser) have more in common than the first argument of
"kiss" and that of the verb "please" (i.e., the one who is pleased).
How these equivalence classes have to be established is an empirical
question. Representationally there are different ways of addressing
the problem; for example, by having features like "+agentive", by
having roles like "agent", or by having higher predicates like "do"
and "change" whose arguments have by definition the characteristics of
an agent, a theme, etc. Uszkoreit and Zaenen take the latter approach
in the model they are developing, but the technology needed to
implement any of these representations seems to be quite similar.
3. The mapping from thematic information onto syntactic categories is
at least partially hierarchical. For example, a subject cannot be
identified with an agent, a theme, or an experiencer until one knows
the complete set of arguments that a verb takes. But given the
thematic information, large classes of verbs behave in the same way;
for example for some verbs, if there is an agent, it will be the
subject (except in the passive form, for which an independent regular
mapping can be defined).
4. It is possible to represent lexical semantic and syntactic
information using the same kind of constraint-based formalism as is
used in other areas of linguistic modelling at CSLI. (See Fenstad,
Halvorsen, Langholm, and van Benthem, 1985, for the most extensive
discussion of the general ideas.)
5. The information about verb meaning, thematic argument classes, and
the mapping onto the lexical syntax can by and large be encoded using
computational tools already developed in connection with the PATR
project at SRI. They are included in Karttunen's D-PATR grammar
development system that is available at CSLI. This system allows the
grammar writer to use default values which can be changed by later
specifications and lexical rules to transform feature sets in even
more radical ways. For a full description of the system, see "D-PATR:
A Development System for Unification-based Grammar Formalisms" (to
appear as a CSLI Report). While the PATR system is useful, it needs
to be further developed. Disjunction and negation must be available
in the description of lexical entries, and it should also be possible
to assign set values to attributes.
6. Among the more basic theoretical questions remains that of
monotonicity. With overwriting and lexical rules, the specifications
of lexical entries are order-dependent, and thus the system as a whole
does not have the property of monotonicity that is felt to be
desirable in other areas of grammar. The reasons and consequences of
this situation have yet to be addressed in the overall context of
grammar.
Thinking about the lexicon as a part that has to be integrated in a
larger whole has the following advantages:
o The available syntactic theories delimit what needs
to be said in the lexicon. For example, when we are
able to state that a particular argument will be the
first syntactic argument of a certain verb, we feel
confident that our job is done, whether this argument
will then be treated as a "subject" in LFG, the
"last thing on the subcat list" in HPSG, or the "thing
that the verb will agree with" (in the simple case)
in Kiparsky's theory.
o The syntactic theories also push us to make distinctions
that tend to be overlooked in more independent approaches,
for instance the thematic information mentioned above in
(2) and (3).
o The computational tools get a new testing ground, and one
can discuss in a concrete way how the encoding of lexical
information compares to that of other linguistic information.
o An important question is the possibility of finding a
way to define words in terms of unanalyzed notions like
change, cause, and intention that can then feed into/be
fed by semantic theories in which these notions are
interpreted. If such a system can be developed,
we will have a lexicon that both on the syntactic and on
the semantic side is compatible with more than one theory.
In the next few months we will tackle that problem by
trying to determine how our view on lexical semantics fits
in with the semantics developed in STASS and AFL.
By trying to be compatible with syntactic and semantic proposals, we
expect to get a better idea about the place of the lexicon in
linguistic description than would be forthcoming from a study in which
the lexicon is seen as independent.
References
Fenstad, J. E., Halvorsen, P.-K., Langholm, T., and van Benthem, J.
1985. Equations, Schemata, and Situations: A Framework for Linguistic
Semantics. Report No. CSLI-85-29.
end of part 4 of 7
-------
∂15-May-86 2029 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 3, part 4
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 15 May 86 20:18:54 PDT
Date: Thu 15 May 86 16:09:45-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: CSLI Monthly, No. 3, part 4
To: friends@SU-CSLI.ARPA
Tel: (415) 723-3561
APPROACHES TO COMPUTER LANGUAGES (LACL)
Stuart Shieber and Hans Uszkoreit
Project Participants: Mary Holstege, Stuart Shieber, Hans Uszkoreit
(Project Leader)
The increasing complexity of computer languages (CLs), current
progress in formal linguistics, and the growing importance of
ergonomic factors in CL design is leading to the emergence of a new
field of research between computer science and linguistics.
LACL is a pilot project to investigate the application of methods
and findings from research on natural languages (NLs) to the design
and description of high-level CLs. The linguistically interesting way
to make CLs resemble NLs is not to simply graft English words or
phrases onto the computer language in a superficial way, as is the
common practice in current CL design, but rather to exploit the rich
inventory of encoding strategies that have developed during the
evolution of NLs and to which humans appear especially attuned.
Currently the LACL group is investigating the need for, and
feasibility of, applying linguistic approaches, techniques, and
findings to a set of sample problems. One of these is the use of
partially free word order among the arguments of functions to allow
flexibility in the order of evaluation and to eliminate the need for
the user to memorize arbitrary argument orders. This requires
disambiguation by sort, type, or special marking. In the paragraphs
below this problem serves as an example of the approach to be taken.
All known human languages have some degree of word order freedom.
That means that in every language there are sets of sentences that
have the same content and differ only in the order of their elements.
We call this phenomenon "permutational variation". Although the
permutational variants always share the same truth-conditional
content, they might differ in their conversational meaning, that is,
not all variants might be equally appropriate in the same situational
and textual contexts.
For the application of our research results on permutational
variation to CLs, we selected an area in which permutational variation
has actually already entered existing programming languages: the order
of the arguments of functions (commands, predicates). Functions with
more than one argument in a programming language correspond roughly to
verbs in NLs. In NLs which allow permutation of arguments --
subjects, direct objects, and indirect objects, for instance -- the
arguments can usually be distinguished through some morphological or
lexical markings such as affixes (e.g., case marking) or particles
that accompany the argument (e.g., prepositions or infinitival
markers). Other NLs, however, require that their arguments occur in a
fixed order.
Until recently, the fixed order approach was the only strategy for
designating the arguments of functions in programming languages.
However, certain more recent languages (like ADA, MESA, or ZETALISP)
provide a concept called "keyword" parameters in function calls or
record construction. The function of keywords is not much different
from the function of argument marking in NLs. In fact, some of the
individual implementations of the concept resemble strategies used in
NLs in quite an astounding way. This is especially true for the
mixture of free and fixed argument order. There is no indication that
the designers of the languages have been guided by linguistic
knowledge about these strategies, it just happened that the techniques
were determined to be useful on a rather pretheoretic level.
The use of keywords for marking arguments has been recently
disputed by Richard O'Keefe (1985). O'Keefe suggests that type
checking might be a better strategy to distinguish freely ordered
arguments. However, there is no reason to assume that a choice of a
single principle has to be made. In fact, NLs employ a number of
different strategies in parallel that complement each other in many
ways. It is often a mixture of word order, case marking, semantic, and
pragmatic information that designates the proper argument assignment.
Although there is no need to burden CLs with unnecessary complexity,
the optimal mix of strategies for argument designation needs to be
decided in a theoretically sound way, using all available knowledge
about encoding systems that have proven useful and appropriate in the
languages which are natural for humans.
Other sample problems for our research are:
o The exploitation of parallels between NL descriptions based on
complex structured information (such as f-structures or complex
categories) and type inference in CLs that allow partial
(so-called polymorphic) types.
Current linguistic theories make heavy use of notions of
partial information and identity constraints on information
which lead to a reliance on unification as a technique
for solving these systems of linguistic constraints. Earlier
independent developments in the theory of programming languages
led to the use of unification as a technique for solving type
constraints in typed programming languages. A rich analogy can be
developed along these lines between NL parsing and CL type inference,
which has the potential to contribute to both fields.
o The use of type inheritance systems for imposing a conceptually
transparent structure on the lexicon.
Inheritance of data types found in object-oriented
programming languages have counterparts as tools for structuring
lexicons in NL systems. The technology of such systems
developed for NL processing might serve to help explicate
the corresponding programming constructs and vice versa.
o The introduction of morphology for marking related lexical items as
to type (derivational morphology), thematic structure (relation
changing), or role (case marking).
o The need for less restricted uses of proforms (pronouns, ellipsis)
in CLs than currently exist.
Long-term issues in the emerging new field that go beyond the scope of
the pilot project include:
o Temporal expressions in the communication among parallel processes.
o The use of speech acts in message passing between objects and
processors.
o The utilization of discourse information to support ellipsis.
References
O'Keefe, R. 1985. Alternatives to Keyword Parameters. SIGPLAN Notices,
June.
LEXICAL PROJECT
Annie Zaenen
Project Participants: Mark Gawron, Kris Halvorsen, Lauri
Karttunen, Martin Kay, Paul Kiparsky,
Mats Rooth, Hans Uszkoreit, Tom Wasow,
Meg Withgott, Annie Zaenen (Project Leader)
The ultimate aim of the Lexical project is to build a lexicon that
is compatible with and useful to the other components of natural
language systems at CSLI. To implement it, we will begin with an
existing on-line dictionary and transform it progressively into what
we need.
Our basic hypothesis is that the syntactic and semantic frameworks
elaborated at CSLI are similar enough for it to be worthwhile to
construct a common lexicon encoded in a form that translates easily
into the different formalisms, rather than to have totally different
encodings for each framework. Given that dictionaries are typically
large, and will, even in the best case, contain more idiosyncratic
information than most components of a natural language system, this is
the only realistic way to proceed.
A lexicon contains morphological, phonological, syntactic, and
semantic information. For our first year of activity we decided to
focus on the syntactic and semantic aspects; the phonological and
morphological sides are better understood, and we assumed it would be
easier in those domains to extract the needed information out of
information already given in existing dictionaries.
In the past months we have investigated what kind of information
should be available to allow syntactic generalizations to be captured.
We started with the syntactic side because we wanted to take advantage
of Kiparsky's current work on this topic and of the presence of Mark
Gawron, a postdoctoral fellow at the center, who has already done
substantial work in this area. Traditionally, generative grammar
presupposes information about syntactic category and
"subcategorization". Our investigation has centered on the role of
thematic information about the arguments of verbs, that is, on the
usefulness of notions like "agent", "source", "theme". This
information is necessary if one wants to capture subregularities like
the relation between the uses of "hit" in "He hit the stick against
the fence" and "He hit the fence with a stick". In the following I
will summarize a few leading ideas that have been established and the
direction that the research is taking.
1. The syntactic behavior of the arguments of predicates is
ultimately based on the meaning of the predicates; hence, an
insightful account should be grounded in semantics. However, it is
useful to pursue the investigation both from the semantic and the
syntactic point of view, as the syntax is the best guide we have at
the moment to linguistically significant generalizations.
2. It is useful to establish equivalence classes that abstract away
from some of the meaning distinctions; for example, the first argument
of the verb "kick" (i.e., the kicker) and that of the verb "kiss"
(i.e., the kisser) have more in common than the first argument of
"kiss" and that of the verb "please" (i.e., the one who is pleased).
How these equivalence classes have to be established is an empirical
question. Representationally there are different ways of addressing
the problem; for example, by having features like "+agentive", by
having roles like "agent", or by having higher predicates like "do"
and "change" whose arguments have by definition the characteristics of
an agent, a theme, etc. Uszkoreit and Zaenen take the latter approach
in the model they are developing, but the technology needed to
implement any of these representations seems to be quite similar.
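Schematically, the three representational options might be encoded as
follows (a Python sketch of our own devising; the names and
structures are purely illustrative):

   # Option 1: a feature on the argument position.
   kick_by_feature = {"arg1": {"agentive": True}}

   # Option 2: a named role assigned to the argument position.
   kick_by_role = {"arg1": "agent"}

   # Option 3: decomposition under a higher predicate whose first
   # argument is, by definition, agent-like:
   # kick(x,y) = do(x, kick'(x,y)).
   kick_by_predicate = ("do", "x", ("kick'", "x", "y"))

   # Whichever encoding is chosen, the lookup machinery is similar:
   print(kick_by_feature["arg1"], kick_by_role["arg1"],
         kick_by_predicate[0])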
3. The mapping from thematic information onto syntactic categories is
at least partially hierarchical. For example, a subject cannot be
identified with an agent, a theme, or an experiencer until one knows
the complete set of arguments that a verb takes. But given the
thematic information, large classes of verbs behave in the same way;
for example, for some verbs, if there is an agent, it will be the
subject (except in the passive form, for which an independent regular
mapping can be defined).
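A toy version of such a hierarchical mapping (a Python sketch under
our own simplifying assumptions; the hierarchy shown and the
treatment of the passive are invented for illustration) might look
like this:

   # Subject selection cannot be decided from one role alone; it
   # inspects the verb's complete set of thematic arguments.
   HIERARCHY = ["agent", "experiencer", "theme"]  # illustrative order

   def pick_subject(roles, passive=False):
       # The passive triggers an independent regular mapping, crudely
       # modelled here as passing over the agent.
       candidates = [r for r in HIERARCHY if r in roles]
       if passive and "agent" in candidates and len(candidates) > 1:
           candidates.remove("agent")
       return candidates[0] if candidates else None

   print(pick_subject({"agent", "theme"}))               # -> agent
   print(pick_subject({"theme"}))                        # -> theme
   print(pick_subject({"agent", "theme"}, passive=True)) # -> theme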
4. It is possible to represent lexical semantic and syntactic
information using the same kind of constraint-based formalism as is
used in other areas of linguistic modelling at CSLI. (See Fenstad,
Halvorsen, Langholm, and van Benthem, 1985, for the most extensive
discussion of the general ideas.)
5. The information about verb meaning, thematic argument classes, and
the mapping onto the lexical syntax can by and large be encoded using
computational tools already developed in connection with the PATR
project at SRI. They are included in Karttunen's D-PATR grammar
development system, which is available at CSLI. This system allows
the grammar writer to use default values that can be overridden by
later specifications, as well as lexical rules that transform feature
sets in even more radical ways. For a full description of the
system, see "D-PATR:
A Development System for Unification-based Grammar Formalisms" (to
appear as a CSLI Report). While the PATR system is useful, it needs
to be further developed. Disjunction and negation must be available
in the description of lexical entries, and it should also be possible
to assign set values to attributes.
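The flavor of this default-plus-override style of encoding can be
suggested in miniature (a Python sketch; this is not D-PATR's
notation or API, merely the general idea):

   # A verb class supplies default feature values; an individual
   # entry may overwrite them, and a "lexical rule" may transform the
   # resulting feature set more radically.
   TRANSITIVE_DEFAULTS = {"cat": "V", "args": 2, "arg1": "agent"}

   def make_entry(defaults, **overrides):
       entry = dict(defaults)    # start from the class defaults
       entry.update(overrides)   # later specifications overwrite them
       return entry

   def passive_rule(entry):      # a toy lexical rule
       out = dict(entry)
       out["voice"] = "passive"
       out["args"] = entry["args"] - 1
       return out

   entry = make_entry(TRANSITIVE_DEFAULTS, arg1="experiencer")
   print(entry)                  # default arg1 overwritten
   print(passive_rule(entry))    # transformed by a lexical rule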
6. Among the more basic theoretical questions remains that of
monotonicity. With overwriting and lexical rules, the specifications
of lexical entries are order-dependent, and thus the system as a whole
does not have the property of monotonicity that is felt to be
desirable in other areas of grammar. The reasons for, and the
consequences of, this situation have yet to be addressed in the
overall context of grammar.
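The order-dependence is easy to see in miniature (again a Python
sketch of our own; monotonic combination is modelled simply as
failure on conflicting values):

   def overwrite(entry, spec):
       out = dict(entry)
       out.update(spec)      # later values silently replace earlier
       return out

   def unify(entry, spec):
       # Monotonic combination: a conflict is an error, never a
       # silent replacement, so the result is order-independent.
       out = dict(entry)
       for k, v in spec.items():
           if k in out and out[k] != v:
               raise ValueError("conflict on " + k)
           out[k] = v
       return out

   a, b = {"case": "acc"}, {"case": "dat"}
   print(overwrite(overwrite({}, a), b))  # {'case': 'dat'}
   print(overwrite(overwrite({}, b), a))  # {'case': 'acc'}: order matters
   # unify({}, a) followed by unify(..., b) raises in either order.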
Thinking about the lexicon as a part that has to be integrated into a
larger whole has the following advantages:
o The available syntactic theories delimit what needs
to be said in the lexicon. For example, when we are
able to state that a particular argument will be the
first syntactic argument of a certain verb, we feel
confident that our job is done, whether this argument
will then be treated as a "subject" in LFG, the
"last thing on the subcat list" in HPSG, or the "thing
that the verb will agree with" (in the simple case)
in Kiparsky's theory (see the sketch after this list).
o The syntactic theories also push us to make distinctions
that tend to be overlooked in more independent approaches,
for instance the thematic information mentioned above in
(2) and (3).
o The computational tools get a new testing ground, and one
can discuss in a concrete way how the encoding of lexical
information compares to that of other linguistic information.
o An important question is the possibility of finding a
way to define words in terms of unanalyzed notions like
change, cause, and intention that can then feed into, and be
fed by, semantic theories in which these notions are
interpreted. If such a system can be developed,
we will have a lexicon that is compatible with more than
one theory on both the syntactic and the semantic side.
In the next few months we will tackle that problem by
trying to determine how our view on lexical semantics fits
in with the semantics developed in STASS and AFL.
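Returning to the first advantage above, the idea of one neutral
statement feeding several theories can be caricatured as follows (a
Python sketch; neither the LFG nor the HPSG representation is
faithfully rendered):

   # A theory-neutral statement: "hit" has two syntactic arguments,
   # and the agent is the first of them.
   hit = {"pred": "hit", "syntactic_args": ["agent", "theme"]}

   def lfg_view(entry):     # first syntactic argument becomes SUBJ
       return {"PRED": entry["pred"],
               "SUBJ": entry["syntactic_args"][0]}

   def hpsg_view(entry):    # subject is last on the subcat list
       return {"pred": entry["pred"],
               "subcat": list(reversed(entry["syntactic_args"]))}

   print(lfg_view(hit))     # {'PRED': 'hit', 'SUBJ': 'agent'}
   print(hpsg_view(hit))    # {'pred': 'hit',
                            #  'subcat': ['theme', 'agent']}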
By trying to be compatible with syntactic and semantic proposals, we
expect to get a better idea about the place of the lexicon in
linguistic description than would be forthcoming from a study in which
the lexicon is seen as independent.
References
Fenstad, J. E., Halvorsen, P.-K., Langholm, T., and van Benthem, J.
1985. Equations, Schemata, and Situations: A Framework for Linguistic
Semantics. Report No. CSLI-85-29.
end of part 4 of 7
-------
∂15-May-86 2052 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 3, part 5
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 15 May 86 20:52:18 PDT
Date: Thu 15 May 86 16:11:24-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: CSLI Monthly, No. 3, part 5
To: friends@SU-CSLI.ARPA
Tel: (415) 723-3561
PHONOLOGY AND PHONETICS
Paul Kiparsky
Project Participants: Mark Cobler, Carlos Gussenhoven, Sharon
Inkelas, Paul Kiparsky (Project Leader),
Will Leben, Marcy Macken, Bill Poser,
Meg Withgott
Goals
This project is focused on postlexical phonology and its relation to
lexical phonology on the one hand, and to phonetic realization on the
other. We have been concentrating on three overlapping areas:
(1) tone and intonation (Leben, Poser),
(2) phonological phrasing and phonological processes which
apply in phrasal domains (Kiparsky, Poser), and
(3) formal properties of phonological rules and
representations (Kiparsky, Poser).
These are traditional concerns of phonology and (in part) of
phonetics, but we are approaching them in a somewhat new way which
seeks to unify those two disciplines and to integrate them with
linguistic theory. From that perspective, the important desiderata
are: (1) to fit the quantitative data obtained from instrumental
phonetic work into a phonological model that has independent
theoretical support, instead of constructing models on a more or less
ad hoc basis, (2) to construct substantial rule systems rather than
focusing, as is possible in some kinds of phonetic and phonological
research, on isolated rules or phenomena, and (3) to develop a
phonological theory consistent with a restrictive theory of grammar
such as those emerging from ongoing work at CSLI and elsewhere --
ambitions which, needless to say, have not made our lives any easier,
though they have made them a lot more interesting.
Tone and Intonation
Intonation in Tone Languages. Leben and Poser have collaborated on a
project on intonation in tonal languages (languages in which words
have different inherent pitch patterns), a topic about which almost
nothing is known. Most of the work has gone into analyzing data on
Hausa intonation that Leben collected in Nigeria last year, with the
help of Cobler and Inkelas (Leben, Cobler, and Inkelas 1986). They
discovered that a number of different intonational phenomena in Hausa
depend for their realization on phrase boundaries. These boundaries
are not typical phonological phrases (in particular, they are not in
general separated from one another by pauses); rather they correspond
to major syntactic boundaries, between NP and VP, and between V and
the different NP and adverbial complements of the verb. Drawing on
other work in autosegmental phonology, they propose that there is a
separate tier on which phrasal tone is represented, distinct from the
tier on which lexical tone is represented. By associating both the
High phrasal tone associated with the extra-High register used for
questions and for emphasis and the Low phrasal tone which describes
downdrift, they have been able to account formally for the apparent
complementarity of register raising and downdrift. They also offer an
alternative explanation of apparent evidence for utterance preplanning
in Hausa, namely that syntactic phrases may be preplanned but that
downdrift itself is not.
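Purely schematically (a Python sketch of our own, with invented tone
values, modelling only the bookkeeping and none of the phonetics),
the two-tier proposal can be pictured like this:

   # Lexical tone and phrasal tone live on separate tiers, both
   # associated with the same string of syllables.
   utterance = {
       "syllables":    ["syl1", "syl2", "syl3", "syl4"],
       "lexical_tier": ["H",    "L",    "H",    "L"],  # word melodies
       "phrasal_tier": {0: "H", 2: "L"},  # tones at phrase boundaries
   }

   def register_shift(i, phrasal):
       # Toy realization: a phrasal H (extra-High register) raises
       # the register from that point on; a phrasal L (downdrift)
       # lowers it.
       shift = 0
       for j in sorted(phrasal):
           if j <= i:
               shift += 1 if phrasal[j] == "H" else -1
       return shift

   for i, tone in enumerate(utterance["lexical_tier"]):
       print(i, tone, register_shift(i, utterance["phrasal_tier"]))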
Pitch Accent. Withgott has continued her joint research with
Halvorsen on the phonetics and phonology of East Norwegian accent. In
a previous study (Withgott and Halvorsen, 1984) they argued that the
prosodic phenomenon of accent in Norwegian depends on the placement of
stress, morphological composition, and on regularities in the lexical
and postlexical phonology (rather than on a syllable-counting rule).
Using data derived from a computer-readable dictionary, they have now
(Withgott and Halvorsen, forthcoming) been able to provide further
support for their analysis through a quantitative study of the
accentual properties of compounds. Moreover, they have been able to
demonstrate that their account correctly predicts hitherto unobserved
phonetic differences between accents "1" and "2". This finding
disconfirms previous analyses which maintain that the two accents
reflect only one phonetic contour displaced in time.
Intonation Seminar. During the spring quarter, Leben, Gussenhoven,
and Poser are conducting a seminar on intonation. It covers background
material as well as current work being done at CSLI and elsewhere.
Participants include Withgott, Jared Bernstein (SRI), Ann Cessaris
(Key Communication in Menlo Park), Anne Fernald (Psychology), and a
number of Linguistics students.
Phrasal Phonology
Questions being addressed here include: How is phonological phrasing
related to syntactic structure? Can syntactic structure condition
phonological rules directly, or only indirectly via phrasing? How do
the properties of phrasal phonological rules differ from those of
lexical rules and of postlexical rules which apply across phrasal
domains? Where do so-called "phonetic rules" fit into the emerging
picture of the organization of the phonological component?
The reason these questions are up in the air is that several recent
developments have made untenable the hitherto standard picture of the
organization of phonology. According to this standard picture, the
rules of the phonological component map underlying representations
onto phonetic representations, which encode the linguistically
determined aspects of pronunciation; phonetic representations are in
turn related to the observed speech signal by largely universal rules
of phonetic implementation. One reason why this view bears rethinking
is that the theory of Lexical Phonology (Kiparsky 1982, 1985; Mohanan
1982) posits the existence of a linguistically significant
intermediate level, the level of lexical representation. The rules
which map underlying representations onto lexical representations turn
out to have very different properties from the rules which map lexical
representations onto phonetic representations. Second, research in
phonetics (Liberman and Pierrehumbert, 1984) suggests that there exist
language-particular context-sensitive rules which manipulate low-level
continuously-valued parameters of the sort assumed to be
nonphonological in character. Third, studies of connected speech
(Selkirk, 1984) have led to the postulation of a prosodic hierarchy
which governs the application of phonological processes to
combinations of words.
These were originally separate lines of investigation, but Poser
and Kiparsky are finding that considerations from all three converge
in a surprising way: there appears to be a fairly clear-cut division
of postlexical rules into two types, "phrasal" and "phonetic" rules,
which differ with respect to conditioning, domain, and discreteness as
follows:
   PHRASAL RULES                        PHONETIC RULES
   o subject to morphological-          o subject to phonological
     lexical conditioning                 conditioning only
   o restricted to minor phrases        o applicable also in larger
                                          prosodic units
   o manipulate discrete feature        o manipulate continuous
     values                               values

     Table 1. A possible general typology of postlexical rules.
The same typology appears to extend to cliticization processes as
well.
We are currently investigating the possibility of assigning the two
types of postlexical rules to different modules of grammar, and
explaining their properties by the principles of those modules.
Formal Properties of Rules and Representations
Underspecification and Constraints on Rules. One of the basic ideas
of Lexical Phonology is that lexical representations are incompletely
specified and receive their nondistinctive feature specifications from
the phonological rules of the language and from universal default
rules. Recently, Kiparsky has explored the possibility that this
underspecified character of lexical representations explains certain
well-known properties of phonological rules which have so far been
accounted for by means of a range of unrelated constraints. One such
property is the restriction of rules to "derived environments" (the
"Strict Cycle Condition"). Another is the commonly encountered
failure of rules to apply if the undergoing segment is in a branching
constituent ("C-command"). Both are derivable from the proper
formulation of underspecification and the principles governing the
application of default rules. This makes it possible to impose
significant constraints on the role of syntactic information in phrase
phonology.
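In miniature, the mechanism might be pictured as follows (a Python
sketch under our own assumptions; the segment, the features, and the
default rule are invented for illustration):

   # Lexical representations are partial: a nondistinctive feature is
   # simply absent rather than set to a value.
   segment = {"coronal": True}      # [voice] left unspecified

   def rule(seg):
       # A rule that applies only to segments already specified for
       # [voice] -- so it cannot apply to underlying, underspecified
       # material (cf. the restriction to derived environments).
       if "voice" in seg:
           seg = dict(seg)
           seg["coronal"] = False
       return seg

   def apply_defaults(seg):
       out = dict(seg)
       out.setdefault("voice", False)   # a universal default rule
       return out

   s = rule(segment)       # no effect: [voice] not yet specified
   s = apply_defaults(s)   # the default supplies [-voice]
   s = rule(s)             # now the rule can apply
   print(s)                # {'coronal': False, 'voice': False}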
Underspecification and Overgeneralization. A tough problem for
linguistic theory is how learners infer abstract grammatical
structures and prune overly-general rules without explicit negative
information (i.e., without explicit correction). Marcy Macken has
developed an account of phonological acquisition that promises to
solve this long-standing puzzle. Her model distinguishes formal
(algebraic) structures of phonological representations, semantic
(particularly stochastic and geometric) properties of phonetic
interpretation, and the nonformal informational structures across time
in the environment. This has led to an investigation of the role of
underspecification and default mechanisms in the overall organization
of the phonological grammar and consideration of constraints on the
formal system that come, not from properties of the abstract system,
but from properties of its extensional system.
Rules and Representation. Poser has been continuing to work on a
theory of phonological rules. This effort is intended both to
establish a more highly constrained system than has hitherto been
available, based upon general principles rather than ad hoc
constraints, and to provide a conceptual analysis and formalization of
the relevant notions. Recent results include a unified account of the
class of phenomena involving exempt peripheral elements, which
constrains the exempt material to single peripheral constituents
(Poser, 1986b), and work on the role of constituency in phonological
representations (Poser, 1986a). The latter bears on the relationship
between phonological representations and phonological rules and
especially on the way in which phonological representations transmit
information. The central point is that the motivated phonological
representation of stress permits the transmission of information about
the morphological structure that would otherwise be prohibited.
References
Kiparsky, P. 1985. Some Consequences of Lexical Phonology. In Colin
Ewen (ed.), Phonology Yearbook, Vol. II. Cambridge University Press.
Kiparsky, P. 1982. Lexical Morphology and Phonology. In I.-S. Yang
(ed.), Linguistics in the Morning Calm. Seoul: Hanshin.
Liberman, M. and Pierrehumbert, J. 1984. Intonational Invariance
Under Changes in Pitch Range and Length. In Mark Aronoff and Richard
Oehrle (eds.), Language Sound Structure. Cambridge, MA: MIT Press.
Mohanan, K. P. 1982. Lexical Phonology. Thesis, MIT. Reproduced by
Indiana University Linguistics Club.
Poser, W. 1986a. Diyari Stress, Metrical Structure Assignment,
and Metrical Representation. Fifth West Coast Conference on Formal
Linguistics, University of Washington, Seattle, Washington, 22 March
1986.
Poser, W. 1986b. Invisibility. GLOW Colloquium, Girona, Spain, 8
April 1986.
Selkirk, E. 1984. Phonology and Syntax: The Relation between Sound
and Structure. Cambridge, MA: MIT Press.
Withgott, M. and Halvorsen, P.-K. 1984. Morphological Constraints on
Scandinavian Tone Accent. Report No. CSLI-84-11.
Withgott, M. and Halvorsen, P.-K. To appear. Phonetics and
Phonological Conditions Bearing on the Representation of East
Norwegian Accent. In N. Smith and H. van der Hulst (eds.),
Autosegmental Studies on Pitch Accent. Dordrecht: Foris.
end of part 5 of 7
-------
∂15-May-86 2147 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 3, part 6
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 15 May 86 21:36:47 PDT
Date: Thu 15 May 86 16:12:40-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: CSLI Monthly, No. 3, part 6
To: friends@SU-CSLI.ARPA
Tel: (415) 723-3561
FINITE STATE MORPHOLOGY (FSM)
Lauri Karttunen
Project Participants: John Bear, Lauri Karttunen (Project Leader),
Ronald Kaplan, Martin Kay, Bill Poser,
Kimmo Koskenniemi (by correspondence),
Mark Johnson
The basis for most of the work within the FSM group is the
observation that phonological rules can be converted to finite state
transducers. A transducer is an automaton with two input/output
heads. Such machines are computationally very efficient and their
efficiency can be further improved by merging several transducers into
a single one. Another benefit is that the system is bidirectional: it
can be used either to relate a surface string to a set of possible
lexical counterparts or to compute all the possible surface
realizations of a sequence of lexical representations. The conversion
of phonological rule systems to automata rests on elementary
operations of finite state machines: union, intersection,
complementation, determinization, and minimization. In order for the
conversion to be practically feasible, the algorithms for these basic
operations must be implemented very efficiently because the size of
the automata that need to be manipulated can grow very large even if
the ultimate outcome is compact.
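To make the two-level idea concrete, here is a minimal sketch of such
a transducer (ours, for illustration only -- not the group's code; the
states, alphabet, and toy nasal-assimilation rule are all invented).
Each transition reads a lexical:surface symbol pair, so the very same
machine generates surface forms from lexical ones and analyzes surface
forms back into lexical ones.

   # (state, lexical symbol, surface symbol) -> next state
   TRANS = {
       (0, 'a', 'a'): 0,
       (0, 'p', 'p'): 0,
       (0, 'N', 'm'): 1,   # N may surface as m only if p follows
       (0, 'N', 'n'): 2,   # N may surface as n only if p does not follow
       (1, 'p', 'p'): 0,
       (2, 'a', 'a'): 0,
   }
   FINAL = {0, 2}

   def transduce(tape, side):
       # Pair `tape`, read on one side of the machine, with every
       # compatible string on the other side; nondeterminism is
       # explored by backtracking.
       def step(state, i, out):
           if i == len(tape):
               if state in FINAL:
                   yield ''.join(out)
               return
           for (s, lex, srf), nxt in TRANS.items():
               here = srf if side == 'surface' else lex
               emit = lex if side == 'surface' else srf
               if s == state and here == tape[i]:
                   yield from step(nxt, i + 1, out + [emit])
       return step(0, 0, [])

   print(list(transduce('aNpa', 'lexical')))   # ['ampa'] -- generation
   print(list(transduce('ampa', 'surface')))   # ['aNpa'] -- analysis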
Kaplan and Kay have worked for several years to produce the basic
set of tools for this type of computational phonology, and are now
very close to completion. In the last few months, Kaplan has
re-implemented many parts of his FSM package to increase its
efficiency; certain time-consuming tasks, such as determinization, can
now be performed in a fraction of the time they used to take. Using
an earlier version of this package, Koskenniemi has completed the
first version of a rule compiler that takes a set of two-level rules
and produces the set of corresponding automata for a bidirectional
analyzer/generator.
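The determinization step mentioned above is the classical subset
construction, sketched below with our own simplifications (the example
automaton is hypothetical, not drawn from Kaplan's package). Each
state of the resulting deterministic machine is a set of states of the
original one, and in the worst case there are exponentially many such
sets -- which is why an efficient implementation matters.

   from collections import deque

   def determinize(nfa, start, alphabet):
       # nfa maps (state, symbol) -> set of successor states
       start_set = frozenset([start])
       dfa, seen, queue = {}, {start_set}, deque([start_set])
       while queue:
           current = queue.popleft()
           for sym in alphabet:
               nxt = frozenset(t for s in current
                               for t in nfa.get((s, sym), ()))
               if not nxt:
                   continue
               dfa[(current, sym)] = nxt
               if nxt not in seen:
                   seen.add(nxt)
                   queue.append(nxt)
       return dfa

   # NFA over {a, b} accepting strings whose next-to-last symbol is a
   nfa = {(0, 'a'): {0, 1}, (0, 'b'): {0},
          (1, 'a'): {2}, (1, 'b'): {2}}
   dfa = determinize(nfa, 0, 'ab')
   states = {q for key, tgt in dfa.items() for q in (key[0], tgt)}
   print(len(states))   # 4 deterministic states for this 3-state NFA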
Because of these technological advances, computational linguistics,
which for a very long time has been preoccupied with syntax and
semantics, has finally made contact with phonology and morphology.
The task for the immediate future is to make the new facilities
generally available and to publicize their existence. To this end, we
organized a successful workshop on this topic at CSLI last summer.
Bear has implemented a new morphological analyzer in PROLOG. Like
its predecessor, Bear's new analyzer is based on Koskenniemi's
two-level model. It regards phonological rules as constraints between
lexical and surface realizations of morphemes and provides a formalism
(less general than Koskenniemi's) for expressing simple two-level
rules. Unlike most other implementations, the analyzer uses these
rules directly, rather than the corresponding finite state
transducers. Thus, the user avoids the labor of expressing
constraints in the form of automata. Another characteristic of the
analyzer is that word-internal syntax is handled by means of phrase
structure rules augmented with attribute-value matrices.
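A rough illustration of the difference (in Python rather than PROLOG,
and with an invented rule format far simpler than Bear's): a two-level
rule can be interpreted directly as a predicate over an aligned
lexical/surface pair of strings, checked at every position, with no
compilation into a transducer.

   def check_pair(lexical, surface, rules):
       # The two strings are assumed aligned symbol-by-symbol;
       # every rule must hold at every position.
       assert len(lexical) == len(surface)
       pairs = list(zip(lexical, surface))
       return all(rule(pairs, i)
                  for i in range(len(pairs)) for rule in rules)

   def n_assimilation(pairs, i):
       # Lexical N must surface as m before p, and as n otherwise.
       lex, srf = pairs[i]
       if lex != 'N':
           return True
       before_p = i + 1 < len(pairs) and pairs[i + 1][0] == 'p'
       return srf == ('m' if before_p else 'n')

   print(check_pair('aNpa', 'ampa', [n_assimilation]))   # True
   print(check_pair('aNpa', 'anpa', [n_assimilation]))   # False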
The emphasis of the work done so far has been on concatenative,
segmental phonology. Work in progress extends the approach in new
directions. Kay has worked out a multi-tiered finite state analysis
of Arabic morphology; Mark Johnson has provided an account of tone in
Kikuyu.
The computational work on automata also appears to be relevant
within the context of the project on Foundations of Grammar and other
CSLI projects which are exploring the notion of unification. As
William Rounds and Ronald Kaplan have pointed out, directed graphs can
be viewed as finite state machines. From this point of view,
unification of feature value matrices is analogous to determinizing
the union of two automata. We will investigate whether this
observation has some practical value.
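A toy version of unification makes the analogy concrete: an
attribute-value matrix is a rooted directed graph whose edges are
features, and unifying two matrices merges the graphs node by node,
failing on a clash of atomic values -- much as determinizing the union
of two automata merges their state sets. (Real feature structures
also allow shared substructure, which this sketch omits; the feature
names are invented.)

   def unify(a, b):
       if a == b:
           return a
       if not (isinstance(a, dict) and isinstance(b, dict)):
           raise ValueError('clash: %r vs %r' % (a, b))
       merged = dict(a)
       for feat, val in b.items():
           merged[feat] = unify(merged[feat], val) if feat in merged else val
       return merged

   m1 = {'cat': 'NP', 'agr': {'num': 'sg'}}
   m2 = {'agr': {'num': 'sg', 'per': '3'}}
   print(unify(m1, m2))  # {'cat': 'NP', 'agr': {'num': 'sg', 'per': '3'}}
   # unify({'num': 'sg'}, {'num': 'pl'}) would raise a clash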
---------------------
JAPANESE SYNTAX WORKSHOP
The second in a series of three workshops on Japanese Syntax was
held at CSLI on March 7-9, 1986. The series is being funded by the
System Development Foundation, and includes participants from
institutions throughout the United States and Japan.
For the second workshop, syntax was broadly construed as covering
also discourse phenomena and the interface between morphology and
syntax. Discourse and morphology are of considerable theoretical
interest at present, and both are of particular interest in the case
of Japanese. Discourse factors are intimately entangled with Japanese
syntax -- in the overt marking of topics and the discourse-level
interpretation of reflexives, for example -- and there is a long
tradition of work in this area by scholars such as Mikami Akira,
Susumu Kuno, and John Hinds. Morphosyntax is of interest because of
the large role played in Japanese by derivational morphology; at
present different theories assign different roles to the morphology,
and some interesting work was presented concerning the different
frameworks.
Several theoretical orientations were represented in the syntax
papers, including Government Binding Theory, Lexical Functional
Grammar, and Generalized Phrase Structure Grammar. Similarly, the
discourse papers represented Kuno's functional approach, Grosz's
centering framework, and Kamp's Discourse Representation Theory, with
commentary by Hinds, a representative of the conversational analysis
approach. This confrontation of syntactic and discourse-based
approaches resulted in intense discussion of whether the phenomena in
question were best accounted for in terms of syntactic structure or
as a result of discourse factors, and of the controversial role
played by structural configuration.
Participants felt that the quality of papers was high, and that
there had been ample discussion of the issues raised. They plan to
publish their papers and a summary of the discussion in a forthcoming
CSLI volume.
---------------------
CSLI POSTDOCTORAL FELLOWS
JEAN MARK GAWRON
After receiving his PhD in Linguistics from UC Berkeley, Gawron
accepted a postdoctoral fellowship at the University of Edinburgh to
work with Henry Thompson and others interested in artificial
intelligence. There he participated in a reading group on situation
semantics and wrote a paper on the status of types in situation
theory.
At CSLI, he has embarked on two bodies of research which he hopes
will reach a convergence point in some work on the semantics of
prepositions. The first is a continuation of his work on situation
theory and situation semantics which includes a sequel to his types
paper called "Types, Parameterized Objects and Information".
Situation theory is the enterprise of laying down the axiomatic
foundations of situation semantics; thus, he feels, a "complete"
situation theory ought to bear much the same relation to situation
semantics that set theory bears to Montague semantics. In this paper
Gawron proposes some new axioms, discusses their relative strengths
and their relationship to other axioms proposed (in particular) by
Barwise and Cooper, and suggests adopting a still somewhat
controversial proposal of Carl Pollard's. Several issues raised in
this paper became the focus of a number of meetings of the STASS
group.
He has also written (and delivered at this year's Berkeley
Linguistics Society Meeting) a paper called "Clefts, Discourse
Representations, and Situation Semantics". This paper proposed a
treatment of some well-known presuppositional properties of it-clefts
("It was Maria that John loved"), and related them to wh-clefts ("The
one John loved was Maria"). It did this in the context of a somewhat
altered situation semantics, proposing a view of linguistic meaning
that diverged slightly from the published accounts, and offering in
return what he hoped would be a general framework for handling
conversational implicature or presupposition.
Gawron's second body of research concerns prepositions. When he
arrived at CSLI, he expected to continue some research he had begun on
preposition meanings, intending particularly to apply them to
morphemes in other languages that did semantically analogous work
(prefixes in Polish and Hungarian). He now doubts some of the basic
hypotheses of that work, and says he has instead found himself
"backing into the lexical semantics", reconsidering some of the
semantic assumptions he had made in "Situations and Prepositions."
This has led in turn to "resurrecting some of the frame-based lexical
representations in my dissertation, and to various discussions about
that work with members of the Lexical group." He has found
particularly valuable the work that Paul Kiparsky is doing on lexical
representations, grammatical relations, and morphology. The result is
that his view on how lexical representations and morphological rules
should interact has changed considerably from that advanced in his
dissertation, and, "... as a kind of side effect, my views on
prepositions have changed as well". Some of these changes are
presented in a paper entitled "Valence Structure Preservation and
Demotion" (delivered at the 22nd Chicago Linguistics Society Meeting).
In summary, he says, "The direct result of both of these lines of
research is that I have had to revise many of the particulars of an
account of the semantics of prepositions that I gave in the types
paper written before I came here. That in turn prompted
reconsideration of some of the basic claims of the paper, which I am
now prepared to ignominiously abandon. So the current work in
progress is a return to English prepositions, with some recanting and
some canting again in different directions".
HELENE KIRCHNER
While still a graduate student in the Department of Computer Science
at the University of Nancy, Kirchner won a position at the Centre
National de la Recherche Scientifique in Jean-Pierre Jouannaud's
research group. Jouannaud had been following the work of Joseph
Goguen and Jose Meseguer (see lead article), and encouraged her to
apply for a CSLI postdoctoral fellowship to facilitate an exchange of
ideas.
Kirchner is interested in developing programming languages with
advanced validation tools. In many applications of computer science
such as aeronautics and the control of complex processes, the problem
of software fallibility is crucial; validation of the correctness of
these huge programs requires programming languages capable of
providing high level specifications and verification tools.
It made sense to begin her work with a programming language that
already had a clear semantics and inference mechanism, and, in
particular, with Goguen and Meseguer's OBJ. OBJ is a high level
specification language for algebraic abstract data types; it has a
clean algebraic semantics based on initial "order-sorted" algebras
(algebras whose carriers are composed of different sorts with possible
inclusions between them). The theory of order-sorted algebras
supports function polymorphism and overloading, error definition and
error recovery, multiple inheritance and sort constraints, which
permit the definition of what would otherwise be partial functions as
total functions on equationally defined subdomains. The basic
entities are objects described by sorts, functions, and equations.
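A small sketch may help with the last point. In an order-sorted
signature one can declare a subsort of nonempty lists, NeList < List,
and give head the rank NeList -> Elt, so that head is total on its
declared domain and no error element is needed. The Python below
merely mimics this with subclassing; the names are invented for
illustration, and OBJ's actual syntax and algebraic semantics are not
reproduced here.

    # Mimicking an order-sorted signature: NeList < List, with head
    # declared only on the subsort.
    class List:                          # sort List
        def __init__(self, items):
            self.items = list(items)

    class NeList(List):                  # subsort NeList < List
        def __init__(self, items):
            assert items, "NeList must be nonempty"
            super().__init__(items)

    def head(l: NeList):                 # total on NeList
        return l.items[0]

    print(head(NeList([1, 2, 3])))       # 1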
During her stay at CSLI she studied, specified, and implemented a
new version of the inference mechanism for OBJ. Based on order-sorted
rewriting, her implementation is a generalization of standard
rewriting taking into account the inclusion relation on sorts. It
preserves the characteristic features of the language such as
modularity, error handling and error recovery, and sort constraints.
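In the same illustrative spirit, here is a cartoon of what it means
for rewriting to take the inclusion relation on sorts into account: a
rule fires only when the sort of the term at hand is included, via the
subsort ordering, in the sort the rule expects. The hierarchy and rule
are invented, and nothing here reflects the structure of Kirchner's
implementation.

    # Sort-aware rewriting over terms represented as (sort, value).
    SUBSORT = {('NzNat', 'Nat'), ('Nat', 'Int')}   # declared subsorts

    def leq(s, t):
        # reflexive-transitive closure of the declared subsort pairs
        return s == t or any(a == s and leq(b, t) for (a, b) in SUBSORT)

    # Each rule is (expected_sort, function); this one doubles a Nat.
    RULES = [('Nat', lambda term: ('Nat', 2 * term[1]))]

    def rewrite(term):
        for expected, fn in RULES:
            if leq(term[0], expected):   # sort-inclusion check
                return fn(term)
        return term                      # no rule applies: normal form

    print(rewrite(('NzNat', 3)))   # ('Nat', 6), since NzNat < Nat
    print(rewrite(('Int', 3)))     # ('Int', 3); Int is not below Nat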
The next step will be to provide validation tools for OBJ or more
generally for equational programming languages -- for instance, tools
that allow the user to specify that the importation of a previously
defined object inside his current program does not modify the behavior
of the imported object. That issue is in general related to theorem
proving in equational theories for which the formalism of term
rewriting systems is especially suitable and efficient.
While OBJ was designed for context independent computation,
Kirchner feels that her work provides a first step to the development
of validation tools for context dependent languages. She feels (along
with Goguen and Meseguer) that situation theory provides a new logic
that is well suited to providing the semantics of such languages, and
she expects to turn to that application when her work on OBJ is
completed.
ED ZALTA
Zalta received his PhD in Philosophy from the University of
Massachusetts and then taught a year each in the Philosophy
Departments of the University of Auckland in New Zealand and of Rice
University before coming to CSLI. His interest was in foundational
issues in metaphysics and the philosophy of language, and the basic
conclusions he had reached seemed similar to some of those of Jon
Barwise and John Perry.
His major effort at CSLI has been to extend the axiomatic theory of
objects and relations developed in his book, "Abstract Objects"; for
example, he has extended his theory of worlds to account for moments
of time and to explain the structural similarities between worlds and
times. And he has designed a comprehensive intensional logic which
avoids the basic problems of Montague's logic. These results have
been incorporated into a new manuscript entitled "Intensional Logic
and the Metaphysics of Intentionality". Other papers he has written
during his fellowship include: "Referring to Fictional Characters: A
Reply", "Logical and Analytic Truths Which Aren't Necessary", and
"Lambert, Mally, and the Principle of Independence". These have been
presented in talks at the Eastern and Pacific Division meetings of the
American Philosophical Association and at the Berkeley Cognitive
Science Seminar.
Zalta enjoys teaching and has taught three courses in Stanford's
Philosophy Department while at CSLI. In the spring of 1985, he held a
seminar on "Nonexistent Objects and the Semantics of Fiction". During
the autumn and winter quarters of the 85-86 academic year, he and
Julius Moravcsik conducted the core seminar in metaphysics and
epistemology, focusing on the nature of events. And in the winter
quarter of this year he taught an undergraduate course on the history
of philosophy from Descartes to Kant.
He has found CSLI to be "a place where you can maximize your
abilities in whatever discipline you're in -- there is always someone
around to answer your questions". He has discovered more applications
of his theoretical approach than he had originally anticipated, and
has learned what it takes to make the approach interesting to others.
---------------------
end of part 6 of 7
-------
∂15-May-86 2210 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 3, part 7
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 15 May 86 22:09:54 PDT
Date: Thu 15 May 86 16:13:43-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: CSLI Monthly, No. 3, part 7
To: friends@SU-CSLI.ARPA
Tel: (415) 723-3561
CSLI SNAPSHOTS: LUCY SUCHMAN
Multidisciplinary as CSLI is, one does not usually see
anthropology listed among the disciplines represented here. But in
fact, Lucy Suchman, an anthropologist at Xerox PARC, is a valued
participant in CSLI activities.
Suchman is a member of the research staff in PARC's Intelligent
Systems Lab. She came to PARC seven years ago as a UC Berkeley
graduate student. Her training and research plans concerned the study
of practical activities and structures of interaction among people,
and she supposed PARC's computers would play no role in her work. But
she found PARC researchers asking some of the same questions she had
been asking, with the difference being that for them the computer was
one of the interacting agents. She began to wonder about the
relationship between what she knew about people interacting with each
other, and people interacting with machines. Her dissertation became
an effort to clarify that relationship, and focused on the role of
language, actions, and embedding circumstances in shared
understanding.
She was first drawn to CSLI by a seminar entitled "Why Context
Won't Go Away". CSLI's thesis that a theory of information should
account for its flow between situated agents -- human or computer --
seemed to coincide with her own ideas. She has for a long time been
following the work of Barbara Grosz and others concerned with the role
of context in dialogue. More recently she has become an active
participant in the Representation and Reasoning project, and continues
to be an interested observer of results coming from the Rational
Agency group. She is interested in a general account of
representation, and specifically in applying the account to the case
of plans and situated actions. She conceives of plans as
representations of action, and seeks to understand the relation of
plans to actions in concrete situations.
Recently, Suchman has become interested in efforts at PARC to
develop computer tools to aid collaboration. In a new research
project, she is asking basic questions about the nature of
collaboration, and is looking at two potential applications: the use
of computers to support and record the flow of information during
meetings, and the use of computers for note keeping by two or more
individuals working collaboratively on the same project.
From CSLI's point of view, the questions Suchman asks from an
empirical vantage point provide vital hooks to the real world. In
turn, she sees CSLI as an extension to her interdisciplinary life in
the Intelligent Systems Lab, and makes a point worth noting about such
labs. While much of her time is spent in building bridges between her
field and those of her colleagues, she is careful never completely to
cross over them. She feels the value of interdisciplinary research is
best realized when each researcher has a clear view of his or her own
field of choice, and does what he or she does best. For example, she
herself collaborates with designers at PARC, raising questions,
discussing research findings, and suggesting possible implications.
But ultimately she leaves the design decisions in their hands. She
believes that the strength of interdisciplinary work, ideally, comes
from the interaction of multiple, equally penetrating, but different,
perspectives on a common subject matter.
---------------------
GIANTS FOLD IN NINTH; CSLI PRESENCE BLAMED
By our Special Correspondent
As twenty representatives of CSLI looked on, the San Francisco
Giants last Saturday contrived to blow a two-run lead over the Chicago
Cubs in the top of the ninth inning. Pitcher Scott Garrelts took a 4-2
lead into the ninth (both Cubs runs off Ron Cey homers), but was then
yanked for Greg Minton, who gave up four runs. An abortive rally in
the bottom of the ninth gave local fans a moment of hope, but it
proved too little too late, and the Cubs emerged 6-5 victors.
In post-game interviews, many of the Giants blamed the loss on the
presence of the CSLI representatives. Garrelts, for example, claims to
have misheard a heated argument between John Perry and Brian Smith
about Donald Davidson's theory of indirect discourse, taking their
references to "On Saying That" to have meant "Ron Cey: in, fat" --
i.e., pitch Cey an inside fat pitch. At another point, Chili Davis was
tagged out running from third on a suicide squeeze when batter Rob
Thompson missed a bunt; Thompson later said that he had been confused
by a remark made in a discussion of perception verbs about the "scene
of A", which he heard as "swing away".
Over and above such distractions, however, the Giants claimed to
have been disconcerted by the presence in the stands of a group whose
philosophical and methodological commitments seemed to many to be
alien to the spirit of the National Pastime. As manager Roger Craig
put it: "Look, I been hearin' about `situations' ever since I came up
-- you got your hit-and-run situation, your squeeze situation, your
bunt situation, your brushback situation -- but I never heard of any
of these `actual' and `factual' types of situations, and if you ask
me, it's this kind of thing is going to ruin baseball. And all this
talk about designation -- we got enough trouble with the designated
hitter. What do they want, designated pitchers and runners and all
like that?" General manager Al Rosen added: "I hear these guys are
all gung ho about representation. Well let me tell you, it's too much
representation that's driving up the salaries nowadays and making it
impossible for a team to break even if they play over .500." Even
baseball commissioner Peter Ueberroth got into the act. "What with the
problems we're having trying to clean up the game, the last thing we
need is a bunch of people who are into ontological promiscuity, and
who countenance six or seven different kinds of relations."
These reservations aside, however, the Giant front office said that
a CSLI group would be welcome at future contests, though denying
persistent rumors that Giant owner Bob Lurie was preparing to
establish a permanent postdoctoral fellowship for anyone versed in
philosophy of personal identity who could demonstrate ability to hit
the curve ball.
-------------
Editor's note
Letters to the editor are welcome. Please send correspondence to me at
CSLI or by electronic mail to BETSY@CSLI.
-------------
-Elizabeth Macken
Editor
-------
∂16-May-86 0922 EMMA@SU-CSLI.ARPA CSLI Calendar update
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 16 May 86 09:22:13 PDT
Date: Fri 16 May 86 08:39:46-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: CSLI Calendar update
To: friends@SU-CSLI.ARPA
Tel: (415) 723-3561
Late announcement:
CSLI COLLOQUIUM
Media Technology, The Art and Science of Personalization
Prof. Nick Negroponte, M.I.T. Arts and Media Lab
(formerly the Architecture Machine Group)
May 22, 4:15pm, Redwood Hall, G-19
As people look toward uncovering what constitutes expertise in one
field or another, there is a noticeable absence of interest in expert
systems wherein you or I are the object of the expertise. The art of
having a conversation includes substantial intelligence beyond the
domain of discussion. This presentation will outline some of the
ongoing (and past) work at MIT's Media Laboratory, illustrating the
potential for sensory-rich communications with computers.
-------
∂16-May-86 1020 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 3, part 5
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 15 May 86 20:52:18 PDT
Date: Thu 15 May 86 16:11:24-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: CSLI Monthly, No. 3, part 5
To: friends@SU-CSLI.ARPA
Tel: (415) 723-3561
PHONOLOGY AND PHONETICS
Paul Kiparsky
Project Participants: Mark Cobler, Carlos Gussenhoven, Sharon
Inkelas, Paul Kiparsky (Project Leader),
Will Leben, Marcy Macken, Bill Poser,
Meg Withgott
Goals
This project is focused on postlexical phonology and its relation to
lexical phonology on the one hand, and to phonetic realization on the
other. We have been concentrating on three overlapping areas:
(1) tone and intonation (Leben, Poser),
(2) phonological phrasing and phonological processes which
apply in phrasal domains (Kiparsky, Poser), and
(3) formal properties of phonological rules and
representations (Kiparsky, Poser).
These are traditional concerns of phonology and (in part) of
phonetics, but we are approaching them in a somewhat new way which
seeks to unify those two disciplines and to integrate them with
linguistic theory. From that perspective, the important desiderata
are: (1) to fit the quantitative data obtained from instrumental
phonetic work into a phonological model that has independent
theoretical support, instead of constructing models on a more or less
ad hoc basis, (2) to construct substantial rule systems rather than
focusing, as is possible in some kinds of phonetic and phonological
research, on isolated rules or phenomena, and (3) to develop a
phonological theory consistent with a restrictive theory of grammar
such as those emerging from ongoing work at CSLI and elsewhere --
ambitions which, needless to say, have not made our lives any easier,
though they have made them a lot more interesting.
Tone and Intonation
Intonation in Tone Languages. Leben and Poser have collaborated on a
project on intonation in tonal languages (languages in which words
have different inherent pitch patterns), a topic about which almost
nothing is known. Most of the work has gone into analyzing data on
Hausa intonation that Leben collected in Nigeria last year, with the
help of Cobler and Inkelas (Leben, Cobler, and Inkelas 1986). They
discovered that a number of different intonational phenomena in Hausa
depend for their realization on phrase boundaries. These boundaries
are not typical phonological phrases (in particular, they are not in
general separated from one another by pauses); rather they correspond
to major syntactic boundaries, between NP and VP, and between V and
the different NP and adverbial complements of the verb. Drawing on
other work in autosegmental phonology, they propose that there is a
separate tier on which phrasal tone is represented, distinct from the
tier on which lexical tone is represented. By placing on this tier
both the High phrasal tone associated with the extra-High register
used for questions and emphasis, and the Low phrasal tone which
describes downdrift, they have been able to account formally for the
apparent complementarity of register raising and downdrift.
alternative explanation of apparent evidence for utterance preplanning
in Hausa, namely that syntactic phrases may be preplanned but that
downdrift itself is not.
Pitch Accent. Withgott has continued her joint research with
Halvorsen on the phonetics and phonology of East Norwegian accent. In
a previous study (Withgott and Halvorsen, 1984) they argued that the
prosodic phenomenon of accent in Norwegian depends on the placement of
stress, morphological composition, and on regularities in the lexical
and postlexical phonology (rather than on a syllable-counting rule).
Using data derived from a computer-readable dictionary, they have now
(Withgott and Halvorsen, forthcoming) been able to provide further
support for their analysis through a quantitative study of the
accentual properties of compounds. Moreover, they have been able to
demonstrate that their account correctly predicts hitherto unobserved
phonetic differences between accents "1" and "2". This finding
disconfirms previous analyses which maintain that the two accents
reflect only one phonetic contour displaced in time.
Intonation Seminar. During the spring quarter, Leben, Gussenhoven,
and Poser are conducting a seminar on intonation. It covers background
material as well as current work being done at CSLI and elsewhere.
Participants include Withgott, Jared Bernstein (SRI), Ann Cessaris
(Key Communication in Menlo Park), Anne Fernald (Psychology), and a
number of Linguistics students.
Phrasal Phonology
Questions being addressed here include: How is phonological phrasing
related to syntactic structure? Can syntactic structure condition
phonological rules directly, or only indirectly via phrasing? How do
the properties of phrasal phonological rules differ from those of
lexical rules and of postlexical rules which apply across phrasal
domains? Where do so-called "phonetic rules" fit into the emerging
picture of the organization of the phonological component?
The reason these questions are up in the air is that several recent
developments have made untenable the hitherto standard picture of the
organization of phonology. According to this standard picture, the
rules of the phonological component map underlying representations
onto phonetic representations, which encode the linguistically
determined aspects of pronunciation; phonetic representations are in
turn related to the observed speech signal by largely universal rules
of phonetic implementation. One reason why this view bears rethinking
is that the theory of Lexical Phonology (Kiparsky 1982, 1985; Mohanan
1982) posits the existence of a linguistically significant
intermediate level, the level of lexical representation. The rules
which map underlying representations onto lexical representations turn
out to have very different properties from the rules which map lexical
representations onto phonetic representations. Second, research in
phonetics (Liberman and Pierrehumbert, 1984) suggests that there exist
language-particular context-sensitive rules which manipulate low-level
continuously-valued parameters of the sort assumed to be
nonphonological in character. Third, studies of connected speech
(Selkirk, 1984) have led to the postulation of a prosodic hierarchy
which governs the application of phonological processes to
combinations of words.
These were originally separate lines of investigation, but Poser
and Kiparsky are finding that considerations from all three converge
in a surprising way: there appears to be a fairly clear-cut division
of postlexical rules into two types, "phrasal" and "phonetic" rules,
which differ with respect to conditioning, domain, and discreteness as
follows:
         PHRASAL RULES                      PHONETIC RULES
    o subject to morphological-        o subject to phonological
      lexical conditioning               conditioning only
    o restricted to minor phrases      o applicable also in larger
                                         prosodic units
    o manipulate discrete feature      o manipulate continuous
      values                             values

    Table 1. A possible general typology of postlexical rules.
The same typology appears to extend to cliticization processes as
well.
We are currently investigating the possibility of assigning the two
types of postlexical rules to different modules of grammar, and
explaining their properties by the principles of those modules.
Formal Properties of Rules and Representations
Underspecification and Constraints on Rules. One of the basic ideas
of Lexical Phonology is that lexical representations are incompletely
specified and receive their nondistinctive feature specifications from
the phonological rules of the language and from universal default
rules. Recently, Kiparsky has explored the possibility that this
underspecified character of lexical representations explains certain
well-known properties of phonological rules which have so far been
accounted for by means of a range of unrelated constraints. One such
property is the restriction of rules to "derived environments" (the
"Strict Cycle Condition"). Another is the commonly encountered
failure of rules to apply if the undergoing segment is in a branching
constituent ("C-command"). Both are derivable from the proper
formulation of underspecification and the principles governing the
application of default rules. This makes it possible to impose
significant constraints on the role of syntactic information in phrase
phonology.
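As a cartoon of the default mechanism at issue (the feature names and
values below are invented for illustration), one can think of a
lexical segment as a partial feature assignment which default rules
complete only where no value has been specified:

    # Underspecified segments as partial feature dicts; defaults fill
    # in only the features left unspecified by the lexicon and rules.
    DEFAULTS = {'voice': False, 'nasal': False}

    def apply_defaults(segment):
        filled = dict(segment)
        for feature, value in DEFAULTS.items():
            filled.setdefault(feature, value)   # only if unspecified
        return filled

    print(apply_defaults({'voice': True}))
    # {'voice': True, 'nasal': False}

A rule that mentions a specified value then simply cannot see material
that is still unspecified, which suggests how derived-environment
effects might fall out of the formulation.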
Underspecification and Overgeneralization. A tough problem for
linguistic theory is how learners infer abstract grammatical
structures and prune overly-general rules without explicit negative
information (i.e., without explicit correction). Marcy Macken has
developed an account of phonological acquisition that promises to
solve this long-standing puzzle. Her model distinguishes formal
(algebraic) structures of phonological representations, semantic
(particularly stochastic and geometric) properties of phonetic
interpretation, and the nonformal informational structures across time
in the environment. This has led to an investigation of the role of
underspecification and default mechanisms in the overall organization
of the phonological grammar and consideration of constraints on the
formal system that come, not from properties of the abstract system,
but from properties of its extensional system.
Rules and Representation. Poser has been continuing to work on a
theory of phonological rules. This effort is intended both to
establish a more highly constrained system than has hitherto been
available, based upon general principles rather than ad hoc
constraints, and to provide a conceptual analysis and formalization of
the relevant notions. Recent results include a unified account of the
class of phenomena involving exempt peripheral elements, which
constrains the exempt material to single peripheral constituents
(Poser, 1986b), and work on the role of constituency in phonological
representations (Poser, 1986a). The latter bears on the relationship
between phonological representations and phonological rules and
especially on the way in which phonological representations transmit
information. The central point is that the motivated phonological
representation of stress permits the transmission of information about
the morphological structure that would otherwise be prohibited.
References
Kiparsky, P. 1985. Some Consequences of Lexical Phonology. In Colin
Ewen (ed.), Phonology Yearbook, Vol. II. Cambridge University Press.
Kiparsky, P. 1982. Lexical Morphology and Phonology. In I.-S. Yang
(ed.), Linguistics in the Morning Calm. Seoul: Hanshin.
Liberman, M. and Pierrehumbert, J. 1984. Intonational Invariance
Under Changes in Pitch Range and Length. In Mark Aronoff and Richard
Oehrle (eds.), Language Sound Structure. Cambridge, MA: MIT Press.
Mohanan, K. P. 1982. Lexical Phonology. Thesis, MIT. Reproduced by
Indiana University Linguistics Club.
Poser, W. 1986a. Diyari Stress, Metrical Structure Assignment,
and Metrical Representation. Fifth West Coast Conference on Formal
Linguistics, University of Washington, Seattle, Washington, 22 March
1986.
Poser, W. 1986b. Invisibility. GLOW Colloquium, Girona, Spain, 8
April 1986.
Selkirk, E. 1984. Phonology and Syntax: The Relation between Sound
and Structure. Cambridge, MA: MIT Press.
Withgott, M. and Halvorsen, P.-K. 1984. Morphological Constraints on
Scandinavian Tone Accent. Report No. CSLI-84-11.
Withgott, M. and Halvorsen, P.-K. To appear. Phonetics and
Phonological Conditions Bearing on the Representation of East
Norwegian Accent. In N. Smith and H. van der Hulst (eds.),
Autosegmental Studies on Pitch Accent. Dordrecht: Foris.
end of part 5 of 7
-------
∂16-May-86 1026 EMMA@SU-CSLI.ARPA CSLI Monthly, No. 3, part 6
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 15 May 86 21:36:47 PDT
Date: Thu 15 May 86 16:12:40-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: CSLI Monthly, No. 3, part 6
To: friends@SU-CSLI.ARPA
Tel: (415) 723-3561
FINITE STATE MORPHOLOGY (FSM)
Lauri Karttunen
Project Participants: John Bear, Lauri Karttunen (Project Leader),
Ronald Kaplan, Martin Kay, Bill Poser,
Kimmo Koskenniemi (by correspondence),
Mark Johnson
The basis for most of the work within the FSM group is the
observation that phonological rules can be converted to finite state
transducers. A transducer is an automaton with two input/output
heads. Such machines are computationally very efficient and their
efficiency can be further improved by merging several transducers into
a single one. Another benefit is that the system is bidirectional: it
can be used either to relate a surface string to a set of possible
lexical counterparts or to compute all the possible surface
realizations of a sequence of lexical representations. The conversion
of phonological rule systems to automata rests on elementary
operations of finite state machines: union, intersection,
complementation, determinization, and minimization. In order for the
conversion to be feasible practically, the algorithms for these basic
operations must be implemented very efficiently because the size of
the automata that need to be manipulated can grow very large even if
the ultimate outcome is compact.
Kaplan and Kay have worked for several years to produce the basic
set of tools for this type of computational phonology, and are now
very close to completion. In the last few months, Kaplan has
re-implemented many parts of his FSM package to increase its
efficiency; certain time-consuming tasks, such as determinization, can
now be performed in a fraction of the time they used to take. Using
an earlier version of this package, Koskenniemi has completed the
first version of a rule compiler that takes a set of two-level rules
and produces the set of corresponding automata for a bidirectional
analyzer/generator.
Because of these technological advances, computational linguistics,
which for a very long time has been preoccupied with syntax and
semantics, has finally made contact with phonology and morphology.
The task for the immediate future is to make the new facilities
generally available and to publicize their existence. To this end, we
organized a successful workshop on this topic at CSLI last summer.
Bear has implemented a new morphological analyzer in PROLOG. Like
its predecessor, Bear's new analyzer is based on Koskenniemi's
two-level model. It regards phonological rules as constraints between
lexical and surface realizations of morphemes and provides a formalism
(less general than Koskenniemi's) for expressing simple two-level
rules. Unlike most other implementations, the analyzer uses these
rules directly, rather than the corresponding finite state
transducers. Thus, the user avoids the labor of expressing
constraints in the form of automata. Another characteristic of the
analyzer is that word-internal syntax is handled by means of phrase
structure rules augmented with attribute-value matrices.
The emphasis of the work done so far has been on concatenative,
segmental phonology. Work in progress extends the approach in new
directions. Kay has worked out a multi-tiered finite state analysis
of Arabic morphology; Mark Johnson has provided an account of tone in
Kikuyu.
The computational work on automata also appears to be relevant
within the context of the project on Foundations of Grammar and other
CSLI projects which are exploring the notion of unification. As
William Rounds and Ronald Kaplan have pointed out, directed graphs can
be viewed as finite state machines. From this point of view,
unification of feature value matrices is analogous to determinizing
the union of two automata. We will investigate whether this
observation has some practical value.
---------------------
JAPANESE SYNTAX WORKSHOP
The second in a series of three workshops on Japanese Syntax was
held at CSLI on March 7 - 9, 1986. The series is being funded by the
System Development Foundation, and includes participants from
institutions throughout the United States and Japan.
For the second workshop, syntax was broadly construed as covering
also discourse phenomena and the interface between morphology and
syntax. Discourse and morphology are of considerable theoretical
interest at present, and both are of particular interest in the case
of Japanese. Discourse factors are intimately entangled with Japanese
syntax -- in the overt marking of topics and the discourse-level
interpretation of reflexives, for example -- and there is a long
tradition of work in this area by scholars such as Mikami Akira,
Susumu Kuno, and John Hinds. Morphosyntax is of interest because of
the large role played in Japanese by derivational morphology; at
present different theories assign different roles to the morphology,
and some interesting work was presented concerning the different
frameworks.
Several theoretical orientations were represented in the syntax
papers, including Government Binding Theory, Lexical Functional
Grammar, and Generalized Phrase Structure Grammar. Similarly, the
discourse paper represented Kuno's functional approach, Grosz's
centering framework, and Kamp's Discourse Representation Theory, with
commentary by Hinds, a representative of the conversational analysis
approach. This confrontation of syntactic and discourse based
approaches resulted in intense discussions of whether the phenomena in
questions were best accounted for in terms of syntactic structure or
as a result of discourse factors and of the controversial role played
by structural configuration.
Participants felt that the quality of papers was high, and that
there had been ample discussion of the issues raised. They plan to
publish their papers and a summary of the discussion in a forthcoming
CSLI volume.
---------------------
CSLI POSTDOCTORAL FELLOWS
JEAN MARK GAWRON
After receiving his PhD in Linguistics from UC Berkeley, Gawron
accepted a postdoctoral fellowship at the University of Edinburgh to
work with Henry Thompson and others interested in artificial
intelligence. There he participated in a reading group on situation
semantics and wrote a paper on the status of types in situation
theory.
At CSLI, he has embarked on two bodies of research which he hopes
will reach a convergence point in some work on the semantics of
prepositions. The first is a continuation of his work on situation
theory and situation semantics which includes a sequel to his types
paper called "Types, Parameterized Objects and Information".
Situation theory is the enterprise of laying down the axiomatic
foundations of situation semantics; thus, he feels, a "complete"
situation theory ought to bear much the same relation to situation
semantics that set theory bears to Montague semantics. In this paper
Gawron proposes some new axioms, discusses their relative strengths
and their relationship to other axioms proposed (in particular) by
Barwise and Cooper, and suggests adopting a still somewhat
controversial proposal of Carl Pollard's. Several issues raised in
this paper became the focus of a number of meetings of the STASS
group.
He has also written (and delivered at this year's Berkeley
Linguistic Society Meeting) a paper called "Clefts, Discourse
Representations, and Situation Semantics". This paper proposed a
treatment of some well-known presuppositional properties of it-clefts
("It was Maria that John loved"), and related them to wh-clefts ("The
one John loved was Maria"). It did this in the context of a somewhat
altered situation semantics, proposing a view of linguistic meaning
that diverged slightly from the published accounts, and offering in
return what was hopefully a general framework for handling
conversational implicature or presupposition.
Gawron's second body of research concerns prepositions. When he
arrived at CSLI, he expected to continue some research he had begun on
preposition meanings, intending particularly to apply them to
morphemes in other languages that did semantically analogous work
(prefixes in Polish and Hungarian). He now doubts some of the basic
hypotheses of that work, and says he has instead found himself
"backing into the lexical semantics", reconsidering some of the
semantic assumptions he had made in "Situations and Prepositions."
This has led in turn to "resurrecting some of the frame-based lexical
representations in my dissertation, and to various discussions about
that work with members of the Lexical group." He has found
particularly valuable the work that Paul Kiparsky is doing on lexical
representations, grammatical relations, and morphology. The result is
that his view on how lexical representations and morphological rules
should interact has changed considerably from that advanced in his
dissertation, and, ".. as a kind of side effect, my views on
prepositions have changed as well". Some of these changes are
presented in a paper entitled "Valence Structure Preservation and
Demotion" (delivered at the 22nd Chicago Linguistics Society Meeting).
In summary, he says, "The direct result of both of these lines of
research is that I have had to revise many of the particulars of an
account of the semantics of prepositions that I gave in the types
paper written before I came here. That in turn prompted
reconsideration of some of the basic claims of the paper, which I am
now prepared to ignominiously abandon. So the current work in
progress is a return to English prepositions, with some recanting and
some canting again in different directions".
HELENE KIRCHNER
While still a graduate student in the Department of Computer Science
at the University of Nancy, Kirchner won a position at the Centre
National de la Researche Scientifique in Jean-Pierre Jouannaud's
research group. Jouannaud had been following the work of Joseph
Goguen and Jose Meseguer (see lead article), and encouraged her to
apply for a CSLI postdoctoral fellowship to facilitate an exchange of
ideas.
Kirchner is interested in developing programming languages with
advanced validation tools. In many applications of computer science
such as aeronautics and the control of complex processes, the problem
of software fallibility is crucial; validation of the correctness of
these huge programs requires programming languages capable of
providing high level specifications and verification tools.
It made sense to begin her work with a programming language that
already had a clear semantics and inference mechanism, and, in
particular, with Goguen and Meseguer's OBJ. OBJ is a high level
specification language for algebraic abstract data types; it has a
clean algebraic semantics based on initial "order-sorted" algebras
(algebras whose carriers are composed of different sorts with possible
inclusions between them). The theory of order-sorted algebras
supports function polymorphism and overloading, error definition and
error recovery, multiple inheritance and sort constraints, which
permit the definition of what would otherwise be partial functions as
total functions on equationally defined subdomains. The basic
entities are objects described by sorts, functions, and equations.
During her stay at CSLI she studied, specified, and implemented a
new version of the inference mechanism for OBJ. Based on order-sorted
rewriting, her implementation is a generalization of standard
rewriting taking into account the inclusion relation on sorts. It
preserves the characteristic features of the language such as
modularity, error handling and error recovery, and sort constraints.
The next step will be to provide validation tools for OBJ or more
generally for equational programming languages -- for instance, tools
that allow the user to specify that the importation of a previously
defined object inside his current program does not modify the behavior
of the imported object. That issue is in general related to theorem
proving in equational theories for which the formalism of term
rewriting systems is especially suitable and efficient.
While OBJ was designed for context-independent computation,
Kirchner feels that her work provides a first step toward the development
of validation tools for context-dependent languages. She feels (along
with Goguen and Meseguer) that situation theory provides a new logic
that is well suited to providing the semantics of such languages, and
she expects to turn to that application when her work on OBJ is
completed.
ED ZALTA
Zalta received his PhD in Philosophy from the University of
Massachusetts and then taught a year each in the Philosophy
Departments of the University of Auckland in New Zealand and at Rice
University before coming to CSLI. His interest was in foundational
issues in metaphysics and the philosophy of language, and the basic
conclusions he had reached seemed similar to some of those of Jon
Barwise and John Perry.
His major effort at CSLI has been to extend the axiomatic theory of
objects and relations developed in his book, "Abstract Objects"; for
example, he has extended his theory of worlds to account for moments
of time and to explain the structural similarities between worlds and
times. And he has designed a comprehensive intensional logic which
avoids the basic problems of Montague's logic. These results have
been incorporated into a new manuscript entitled "Intensional Logic
and the Metaphysics of Intentionality". Other papers he has written
during his fellowship include: "Referring to Fictional Characters: A
Reply", "Logical and Analytic Truths Which Aren't Necessary", and
"Lambert, Mally, and the Principle of Independence". These have been
presented in talks at the Eastern and Pacific Division meetings of the
American Philosophical Association and at the Berkeley Cognitive
Science Seminar.
Zalta enjoys teaching and has taught three courses in Stanford's
Philosophy Department while at CSLI. In the spring of 1985, he held a
seminar on "Nonexistent Objects and the Semantics of Fiction". During
the autumn and winter quarters of the 85-86 academic year, he and
Julius Moravcsik conducted the core seminar in metaphysics and
epistemology, focusing on the nature of events. And in the winter
quarter of this year he taught an undergraduate course on the history
of philosophy from Descartes to Kant.
He has found CSLI to be "a place where you can maximize your
abilities in whatever discipline you're in -- there is always someone
around to answer your questions". He has discovered more applications
of his theoretical approach than he had originally anticipated, and
has learned what it takes to make the approach interesting to others.
---------------------
end of part 6 of 7
-------
∂20-May-86 1551 EMMA@SU-CSLI.ARPA Calendar update
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 20 May 86 15:51:07 PDT
Date: Tue 20 May 86 15:08:14-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Calendar update
To: friends@SU-CSLI.ARPA
Tel: (415) 723-3561
CSLI SEMINAR
Events and Modes of Representing Change
Carol Cleland
2:15, Thursday, May 22, Ventura Trailers
We ordinarily think of change as something which is inherently dynamic:
the shattering of a window, the flying of a bird, the explosion of the space
shuttle Challenger. That is to say, we think of change as involving
some kind of physically real process or activity. This process or activity
ostensibly provides the actual physical medium for the alteration of
conditions associated with the change.
In this light it is surprising how few of our modes of representing
change provide for any notion of process or activity. In contemporary
analytic philosophy, for instance, change is almost invariably
represented in terms of a mere difference in the properties instanced by
an object at different times. Similarly, change is often represented in
computation theory as a mere difference in discrete machine
configurations at different times.
It is my contention that such representations of change are inadequate.
Change involves more than a mere sequence of, in effect, durationless
entities. In this talk I will adumbrate an alternative account of
change--an account which takes seriously the notion that change involves
primitive activity. I will also argue that certain traditional
philosophical puzzles regarding the nature of events appear to be
resolvable if we incorporate such a notion of change into an account
of events.
-------
``Meaning and the Self''
by John Perry
The Henry Waldgrave Stuart Chair Inaugural Lecture
Friday, May 23, 8 pm, History Room 2
Reception to follow in Tanner Library, Building 90
-------
-------
∂21-May-86 1800 JAMIE@SU-CSLI.ARPA Calendar, May 22, No. 17
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 21 May 86 18:00:32 PDT
Date: Wed 21 May 86 17:03:26-PDT
From: Jamie Marks <JAMIE@SU-CSLI.ARPA>
Subject: Calendar, May 22, No. 17
To: friends@SU-CSLI.ARPA
!
C S L I C A L E N D A R O F P U B L I C E V E N T S
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
May 22, 1986 Stanford Vol. 1, No. 17
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR THIS THURSDAY, May 22, 1986
12 noon TINLunch
Ventura Hall Reading: ``Conditional Propositions,'' Ch. 7, Inquiry
Conference Room by Robert Stalnaker
Discussion led by Chris Swoyer (Swoyer@csli)
2:15 p.m. CSLI Seminar
Ventura Hall Events and Modes of Representing Change
Trailer Classroom Carol Cleland (Cleland@csli)
(Abstract on page 3)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Redwood Hall Media Technology, the Art and Science of
Room G-19 Personalization
Nick Negroponte, MIT Arts and Media Lab.
(Abstract on page 4)
--------------
CSLI ACTIVITIES FOR NEXT THURSDAY, May 29, 1986
12 noon TINLunch
Ventura Hall Reading: ``A Speaker-based Approach to Aspect''
Conference Room by Carlota Smith
Discussion led by Dorit Abusch (Abusch@csli)
(Abstract on page 2)
2:15 p.m. CSLI Seminar
Ventura Hall Why Language isn't Information
Trailer Classroom Terry Winograd (Winograd@csli)
(Abstract on page 3)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Redwood Hall Natural Language as a Reflection of Cognitive
Room G-19 Structure
Bill Croft, Stanford & SRI International (Croft@sri-ai)
(Abstract on page 4)
--------------
!
Page 2 CSLI Calendar May 22, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
NEXT WEEK'S TINLUNCH
``A Speaker-based Approach to Aspect''
by Carlota Smith
Discussion led by Dorit Abusch (Abusch@csli)
Two components contribute to sentential aspect: situation aspect and
viewpoint aspect. Situation aspect is dependent on the aspectual
classification of verbs, time adverbials, etc. Speakers determine the
situation type of an actual situation and correlate it with the
appropriate linguistic form in their language. Speakers can also talk
about a situation from a certain viewpoint (or perspective) as either
perfective or imperfective (corresponding to simple tense or
progressive in English). The interaction between the viewpoint aspect
chosen by the speaker and the situation aspect determines sentential
aspect. This approach can explain the aspect of simple tense event
sentences in English as well as non-standard aspectual choices.
Although aspectual viewpoint in French (imparfait vs. passe compose)
is different from English, it interacts with situation aspect in a
similar way. Examples from other languages are also discussed.
(Note: The term ``situation'' used by Smith is not that employed in
situation semantics).
------------
THE HENRY WALDGRAVE STUART CHAIR INAUGURAL LECTURE
``Meaning and the Self''
by John Perry
Friday, May 23, 8 pm, History Room 2
(Reception to follow in Tanner Library, Building 90)
!
Page 3 CSLI Calendar May 22, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
THIS WEEK'S SEMINAR
Events and Modes of Representing Change
Carol Cleland (Cleland@csli)
We ordinarily think of change as something which is inherently
dynamic: the shattering of a window, the flying of a bird, the
explosion of the space shuttle Challenger. That is to say, we think
of change as involving some kind of physically real process or
activity. This process or activity ostensibly provides the actual
physical medium for the alteration of conditions associated with the
change.
In this light it is surprising how few of our modes of representing
change provide for any notion of process or activity. In contemporary
analytic philosophy, for instance, change is almost invariably
represented in terms of a mere difference in the properties instanced
by an object at different times. Similarly, change is often
represented in computation theory as a mere difference in discrete
machine configurations at different times.
It is my contention that such representations of change are
inadequate. Change involves more than a mere sequence of, in effect,
durationless entities. In this talk I will adumbrate an alternative
account of change---an account which takes seriously the notion that
change involves primitive activity. I will also argue that certain
traditional philosophical puzzles regarding the nature of events
appear to be resolvable if we incorporate such a notion of change into
an account of events.
----------
NEXT WEEK'S SEMINAR
Why Language isn't Information
Terry Winograd (Winograd@csli)
In developing theories of language, researchers introduce formal
objects corresponding to meanings and try to develop rules relating
those objects. These rules may be more or less sophisticated in
taking into account context, utterance situation, etc., but they all
ground their account of linguistic meaning in terms of something that
lies outside of language, whether it be truth conditions, possible
worlds, situations, or ``concepts''.
This seems to work well enough when dealing with simple
descriptions of perceived physical reality (``The cat is on the mat'',
``Snow is white'', etc.) but is far more difficult and less convincing
when applied to more realistic examples of language use, either from
casual conversation (``You aren't kidding, are you?'') or from text
like this abstract.
I will argue that in basing theories of meaning on an articulation
of ``objects,'' ``properties'', etc. we never escape the domain of
language, and are really articulating the possible moves in a kind of
conversation. Much of the technical work done in semantics and
philosophy of language can be reinterpreted in this light, but it
leads to radically different overall objectives and different
expectations about the potential for building computer programs that
could legitimately be said to ``understand'' or ``mean what they
say''.
The talk is based on parts of a book I have recently completed with
Fernando Flores, entitled Understanding Computers and Cognition, and
on discussions in the Representation and Reasoning group.
!
Page 4 CSLI Calendar May 22, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
THIS WEEK'S COLLOQUIUM
Media Technology, the Art and Science of Personalization
Nick Negroponte, MIT Arts and Media Lab.
As people look toward uncovering what constitutes expertise in one
field or another, there is a noticeable absence of interest in expert
systems wherein you or I are the object of the expertise. The art of
having a conversation includes substantial intelligence beyond the
domain of discussion. This presentation will outline some of the
ongoing (and past) work at MIT's Media Laboratory, illustrating
potentials for sensory-rich communications with computers.
----------
NEXT WEEK'S COLLOQUIUM
Natural Language as a Reflection of Cognitive Structure
William Croft, Stanford & SRI International
Natural languages and their structure generally provide the most
tractable and least nebulous evidence in cognitive science. Cognitive
science should (and frequently does) turn to linguistics for potential
hypotheses of general cognitive structure. Hence it is plausible to
ask if natural language structures reflect in some more or less direct
way cognitive structures of greater generality. The purpose of this
talk is to present a simple but nevertheless fundamental set of
hypotheses based on cross-linguistically universal generalizations,
whose validity would be worth testing in nonlinguistic cognitive
modalities.
The first and most naive proposal a cognitive scientist might
entertain is that human beings divide their experience (or whatever it
is) into parts, and consequently establish relations among those
parts. Natural language provides clues as to what parts experience is
divided into and what relations are used to hold those parts together.
The universal syntactic categories noun, verb and adjective are based
on the interaction of (1) a commonsense ontological classification
into objects, properties and actions, and (2) principles of organizing
information in discourse. The ``case hierarchy'' of subject, object
and oblique reflects the organization of the ``parts'' of experience
into a causal network of events with their participants, given a
discourse-determined selection of subject.
In addition to these hypotheses, a more general principle of cognition
is proposed: human beings select certain situation types as ``focal''
or ``natural'', and other, similar situation types are coerced into
the model provided by the ``focal'' situation types. The linguistic
manifestation of this principle is found in the distribution of
universal vs. typologically variable grammatical phenomena.
-------
∂28-May-86 1725 JAMIE@SU-CSLI.ARPA Calendar, May 29, No. 18
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 28 May 86 17:25:36 PDT
Date: Wed 28 May 86 16:48:15-PDT
From: Jamie Marks <JAMIE@SU-CSLI.ARPA>
Subject: Calendar, May 29, No. 18
To: friends@SU-CSLI.ARPA
C S L I C A L E N D A R O F P U B L I C E V E N T S
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
May 29, 1986 Stanford Vol. 1, No. 18
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR THIS THURSDAY, May 29, 1986
12 noon TINLunch
Ventura Hall Reading: ``A Speaker-based Approach to Aspect''
Conference Room by Carlota Smith
Discussion led by Dorit Abusch (Abusch@csli)
2:15 p.m. CSLI Seminar
Ventura Hall Why Language isn't Information
Trailer Classroom Terry Winograd (Winograd@csli)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Redwood Hall Natural Language as a Reflection of Cognitive
Room G-19 Structure
Bill Croft, Stanford & SRI International (Croft@sri-ai)
--------------
CSLI ACTIVITIES FOR NEXT THURSDAY, June 5, 1986
12 noon TINLunch
Ventura Hall Reading: ``Symbolism: Its Meaning and Effect''
Conference Room by A.N. Whitehead
Discussion led by Carol Cleland
(Abstract next week)
2:15 p.m. CSLI Seminar
Ventura Hall On the Nature of the Intentional
Trailer Classroom Ivan Blair (Blair@csli)
(Abstract on page 2)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Redwood Hall Title to be announced
Room G-19 Julius Moravcsik
--------------
Page 2 CSLI Calendar May 29, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
NEXT WEEK'S SEMINAR
On the Nature of the Intentional
Ivan Blair (Blair@csli)
After a period of banishment, the mental has been reinstated as an
object of scientific study; yet, just what IS the mental? Minds --
particularly those of humans and ``higher'' animals -- are central
examples of what I am calling the intentional, although I construe
intentionality more broadly. In this talk, I shall try to draw some
conclusions regarding the nature of the intentional and its place in
our theories of the world.
--------
-------
∂02-Jun-86 0846 JAMIE@SU-CSLI.ARPA [Carl Pollard <POLLARD@SU-CSLI.ARPA>: ESCOL 86]
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 2 Jun 86 08:46:52 PDT
Date: Mon 2 Jun 86 07:40:58-PDT
From: Jamie Marks <JAMIE@SU-CSLI.ARPA>
Subject: [Carl Pollard <POLLARD@SU-CSLI.ARPA>: ESCOL 86]
To: friends@SU-CSLI.ARPA
Mail-From: POLLARD created at 27-May-86 13:15:20
Date: Tue 27 May 86 13:15:20-PDT
From: Carl Pollard <POLLARD@SU-CSLI.ARPA>
Subject: ESCOL 86
To: friends@SU-CSLI.ARPA
ESCOL 86, Eastern States Conference on Linguistics, will be
jointly sponsored by the University of Pittsburgh and Carnegie-Mellon
University.
Dates: October 10-12, 1986
Invited Speakers: Charles Fillmore (Berkeley)
Lily Wong Fillmore (Berkeley)
Martin Kay (Xerox PARC)
George Miller (Princeton)
Added Attraction: Demonstrations of NLP Software
The theme of the conference is "Linguistics at work": we invite papers
on computational linguistics or language teaching, as well as on any
topic of general linguistic interest.
Send a one-page anonymous abstract, with a separate return
address, by US Mail to
ESCOL 86
Department of Linguistics
University of Pittsburgh
Pittsburgh, PA 15260
or by netmail to
Thomason@c.cs.cmu.edu.ARPA.
Abstracts should arrive in Pittsburgh by June 13. Submitted papers will
be scheduled for 20 minutes, with 10 minutes for discussion.
-------
-------
∂04-Jun-86 1840 JAMIE@SU-CSLI.ARPA Calendar, June 5, No. 19
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 4 Jun 86 18:39:57 PDT
Date: Wed 4 Jun 86 17:39:58-PDT
From: Jamie Marks <JAMIE@SU-CSLI.ARPA>
Subject: Calendar, June 5, No. 19
To: friends@SU-CSLI.ARPA
C S L I C A L E N D A R O F P U B L I C E V E N T S
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
June 5, 1986 Stanford Vol. 1, No. 19
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR THIS THURSDAY, June 5, 1986
12 noon TINLunch
Ventura Hall Reading: ``Symbolism: Its Meaning and Effect''
Conference Room by A.N. Whitehead
Discussion led by Carol Cleland (Cleland@csli)
(Abstract on page 2)
2:15 p.m. CSLI Seminar
Ventura Hall On the Nature of the Intentional
Trailer Classroom Ivan Blair (Blair@csli)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Redwood Hall AFT, Past and Prospects
Room G-19 Julius Moravcsik (Julius@csli)
(Abstract on page 2)
--------------
CSLI ACTIVITIES FOR NEXT THURSDAY, June 12, 1986
12 noon No TINLunch
Ventura Hall
Conference Room
2:15 p.m. CSLI Seminar
Ventura Hall Ordinals and Mathematical Structure
Trailer Classroom Chris Menzel (Menzel@csli)
(Abstract on page 3)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Redwood Hall To be announced
Room G-19
--------------
!
Page 2 CSLI Calendar June 5, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
ANNOUNCEMENT
Please note that as in past years, CSLI will not have regularly
scheduled Thursday activities during the summer months. The last
regularly scheduled events will be held Thursday, June 12, 1986.
Events will resume next September.
--------
THIS WEEK'S TINLUNCH
``Symbolism: Its Meaning and Effect''
by A.N. Whitehead
Discussion led by Carol Cleland (Cleland@csli)
According to Whitehead, there is no relationship between a ``symbol''
and its ``meaning'' which determines which is symbol and which is
meaning, or even that there shall be a referential relation between
the two. For Whitehead ``symbolic reference'' is an actual process or
activity on the part of a percipient whereby ``symbol'' and ``meaning''
are united. This is in contrast to traditional accounts of the
referential relation as denotation.
----------
THIS WEEK'S COLLOQUIUM
AFT, Past and Prospects
Julius Moravcsik (Julius@csli)
AFT was introduced as a theory of lexical representation with the
following distinguishing features: a) Meanings determine extension
only partially, b) Meaning structures are composed of (at most) four
components c) by talking about the four meaning components we can give
the theory of lexical representation more empirical explanatory power.
This year's work expanded the theory considerably, showing how it ties in
with direct reference theory, with semantic predicate structure analysis,
and with accounts of linguistic competence. In the talk examples will be
given, showing how AFT analysis yields an interesting account of
verb semantics and predicate-argument structure, and what additional
factors are needed in order to fully specify reference.
----------
!
Page 3 CSLI Calendar June 5, 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
NEXT WEEK'S SEMINAR
Ordinals and Mathematical Structure
Chris Menzel (Menzel@csli)
This talk will have two components, one semantical and the other
philosophical. I will begin with an account of the semantics of ordinals
in English as they occur in NPs like `The third man on the moon' and
`Seventeen of the first one hundred tickets'. The account will be
developed within the framework of generalized quantifiers, augmented by
work of Godehard Link on plurals.
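As a rough indication of the generalized-quantifier format (the clauses
below are illustrative glosses, not necessarily the analysis to be
presented), a determiner denotes a relation between the noun's extension
A and the predicate's extension B:
   seventeen(A)(B)   iff   |A ∩ B| = 17
   the-third(A)(B)   iff   the third element of A, under a contextually
                           given ordering, is in B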
I will then move to the philosophical problem that started me thinking
about these semantical issues in the first place, viz., the nature of
number. An influential movement in the philosophy of mathematics known
as ``structuralism'' claims that mathematics is the study of abstract
structure per se, and not of a realm of peculiarly mathematical objects
like ordinal numbers at all. Indeed, structuralists argue, any attempt
to find such objects is necessarily wrong-headed. For to identify any
particular objects as (say) THE ordinal numbers is in effect just to pick
out an INSTANCE of the structure which is the proper subject matter of
arithmetic (viz., the structure exemplified by all omega-sequences), and
not the structure itself.
I think structuralism is half right. Much of mathematics is in fact the
study of abstract structure, but I will argue that when we get clear
about what this comes to, there are natural accounts to be given of
several types of mathematical objects. In particular, I will revive an
old neglected doctrine of Russell's that the ordinal numbers are
(roughly) abstract relations between objects and structured situations of
a certain kind. I'll then point out why this doesn't run afoul of the
structuralist argument above. I'll close by showing that this view of
the ordinals is implicit in the semantics given in the first part of the
talk.
-------
∂11-Jun-86 1537 EMMA@SU-CSLI.ARPA Calendar, June 12, No. 20
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 11 Jun 86 15:37:37 PDT
Date: Wed 11 Jun 86 14:55:33-PDT
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Calendar, June 12, No. 20
To: friends@SU-CSLI.ARPA
Tel: (415) 723-3561
C S L I C A L E N D A R O F P U B L I C E V E N T S
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
June 12, 1986 Stanford Vol. 1, No. 20
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR THIS THURSDAY, June 12, 1986
12 noon TINLunch
Ventura Hall No TINLunch
Conference Room
2:15 p.m. CSLI Seminar
Ventura Hall Ordinals and Mathematical Structure
Trailer Classroom Chris Menzel (Menzel@csli)
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Colloquium
Redwood Hall No colloquium
Room G-19
--------------
ANNOUNCEMENT
Please note that as in past years, CSLI will not have regularly
scheduled Thursday activities during the summer months. The last
regularly scheduled events will be held Thursday, June 12, 1986.
Events will resume next September. The CSLI calendar will also be
suspended for the summer.
-------
∂24-Jun-86 1615 JAMIE@SU-CSLI.ARPA CSLI Monthly, Vol. 1, No. 4, part 1
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 24 Jun 86 16:14:54 PDT
Date: Tue 24 Jun 86 15:28:43-PDT
From: Jamie Marks <JAMIE@SU-CSLI.ARPA>
Subject: CSLI Monthly, Vol. 1, No. 4, part 1
To: newsreaders@SU-CSLI.ARPA
C S L I M O N T H L Y
-------------------------------------------------------------------------
June 1986 Vol. 1, No. 4
-------------------------------------------------------------------------
A monthly publication of the Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
------------------
CONTENTS
Meaning and Mechanism
by Stanley Rosenschein part 1
Project Reports
Rational Agency (RatAg)
by Michael Bratman and Amy Lansky part 2
Embedded Computation (EC)
by Brian C. Smith part 3 - 4
Analysis of Graphical Representation
by David Levy part 5
Grammatical Theory and Discourse Structure
by Joan Bresnan and Annie Zaenen part 5
AFT Lexical Representation Theory
by Julius Moravcsik part 6
Visual Communication
by Alexander Pentland part 6
John Perry's Inaugural Lecture for
the Henry Waldgrave Stuart Chair part 7
CSLI Postdoctoral Fellows: Peter Sells part 7
CSLI Snapshots: Martha Pollack part 7
New CSLI Publications part 7
------------------
MEANING AND MECHANISM
Stanley Rosenschein
CSLI was founded (and funded) on the conviction that certain strands
of research in philosophy, linguistics, artificial intelligence, and
computer science could be synthesized into a coherent, mathematically
grounded science of language and information and that such a synthesis
would constitute an important intellectual advance with significant
technological consequences. Although it is hard to predict when this
synthesis might be achieved, the process of *trying* to achieve it
is itself worth reflecting upon. Rather than attempt to comment on
this process in the large, I would like to report anecdotally and from
a very personal perspective how my own research in artificial
intelligence has been affected by some of the disciplines represented
at CSLI and by the CSLI experience itself.
Paths to AI
The motivations of AI researchers are varied, but, for me, one of the
most important is a desire to see the technological fruits of AI
within my lifetime. The public has been conditioned to expect
machines with the intelligence of C3PO; I would like to do my part to
help science catch up with Lucas. Although my work is sometimes
regarded as theoretical, I approach AI theory from a utilitarian point
of view. With only one lifetime to spend, one's time must be invested
wisely, and I feel that theoretical work is likely to bring a higher
technological return per unit investment of intellectual energy.
When, as an undergraduate, I moved from sociology to computer science,
one of the most refreshing aspects of my new-found field was the way
in which the most complex edifices could be constructed from the
simplest of primitive components. It felt like tinker toys, but with
parts that were more elementary and at the same time more universal: a
nand gate to get all the Boolean functions, Boolean functions and a
delay element to get all finite machines, a finite machine and a tape
of zeroes and ones to get all of computation. The primitives were so
simple and clear; the complexity was entirely in the arrangements of
basic elements. Often these arrangements could themselves be generated
combinatorially from simpler structures, the properties of the whole
following logically from the properties of the parts. Moreover, these
objects were not static like buildings or graphs but rather microcosms
of physical reality that unfolded in the temporal dimension as well as
the spatial with a comforting inexorability and reproducibility. As a
sociology student, I had been frustrated by the vagueness and
ambiguity of the fundamental concepts of that field; computer science
satisfied my desire for certainty and simplicity.
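The first of those constructions is easy to make concrete (a minimal
sketch in Python, added for illustration; it is not from the article):
negation, conjunction, and disjunction, and with them every Boolean
function, can be built from nand alone.
   # NAND universality in a few lines.
   def nand(a, b): return not (a and b)
   def not_(a):    return nand(a, a)
   def and_(a, b): return not_(nand(a, b))
   def or_(a, b):  return nand(not_(a), not_(b))

   # truth table for or_: [False, True, True, True]
   print([or_(a, b) for a in (False, True) for b in (False, True)])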
Unfortunately, my real interest was in that segment of computer
science known as AI, where it was vagueness and ambiguity all over
again! From a computer science perspective, of course, AI systems were
simply computer programs exhibiting certain interesting behaviors. As
such, they were amenable to rigorous characterization in strictly
operational terms. I could not escape the feeling, however, that
there was something more to be said about the *content* of the
computations. AI researchers recognized this but often chose to
describe the content of AI systems in vague, mentalistic terms. The
AI literature was filled with words like ``planning,'' ``reasoning,''
``problem-solving,'' ``heuristic search,'' ``knowledge
representation'' and (naturally) ``intelligence.'' After several
years of graduate school I began to realize that although the actual
AI programs were concrete enough, the terms in which they were being
explained were not unlike what I had left behind in the social
sciences. Did AI have no fundamental *technical* concepts from
which everything else could be constructed? I longed for the zeroes,
the ones, and the nand gates!
Still, the goals of AI stirred me in a way that operating systems
couldn't, and I began to search for the ``hard science'' of AI.
Quest for a Formal Framework, Part I: The Road to Logicism
Some observers of the AI scene like to classify AI researchers
according to whether they stress mathematical approaches (the
*neats*) or intuitive programming (the *scruffies*). As might be
expected, I was attracted immediately to the former and began to seek
out islands of neatness in the scruffy seas of AI.
One such island was natural language parsing. I had taken an
undergraduate course in linguistics at Columbia from Labov, who had
communicated both the substantive content of generative grammar and an
enormous enthusiasm for the science of language. The relation between
machines and the (syntactic) structure of languages had been studied
mathematically and thus constituted a natural theoretical bridge
between things ``cognitive'' and things computational. Parse trees
could be easily regarded both as abstract mathematical objects
suitable for characterizing syntactic structures and as data
structures to be represented and manipulated in a machine.
In the case of semantics, the mathematical objects diverged somewhat
from the computational objects. Montague's PTQ, for instance, was
quite impressive in its subtlety and rigor but was filled with
model-theoretic objects that were not suitable to be represented
directly in a machine. Still, it seemed possible to adapt logical
systems like Montague's to the needs of computational linguistics by
having the computer manipulate symbolic formulas (e.g., well-formed
formulas of intensional logic) that stood for the model-theoretic
objects (e.g., functions from possible worlds to truth values). An
obvious strategy for designing a natural-language system was to have
it parse sentences, translate them into logical formulas, and carry
out deductive operations on the result.
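Schematically, and in drastically simplified form (a toy Python sketch
using atomic formulas and a single inference rule, where the real
proposals used intensional logic), the strategy looks like this:
   # Toy parse/translate/deduce pipeline: a one-pattern "grammar," a
   # translation into atomic formulas, and modus ponens as the sole rule.
   FACTS = {("white", "snow")}
   RULES = [(("white",), ("reflective",))]     # white(x) => reflective(x)

   def translate(sentence):                    # handles "Snow is white."
       subj, _, adj = sentence.lower().rstrip(".").split()
       return (adj, subj)

   def deduce(goal):
       pred, arg = goal
       if goal in FACTS:
           return True
       return any(deduce((p[0], arg)) for p, c in RULES if c[0] == pred)

   print(deduce(translate("Snow is reflective.")))   # True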
This strategy for language processing fit in well with the prevalent
``neat'' approaches to the broader area of knowledge representation
and reasoning as formulated by John McCarthy, Nils Nilsson, Pat Hayes,
Bob Moore, and others. According to this view, AI research should
proceed roughly as follows: formalize commonsense knowledge in a
suitable logical system (McCarthy's preference is first-order logic
with some accommodation for nonmonotonic reasoning) and program a
computer to manipulate data structures representing formulas of this
logical system. McCarthy's own research has emphasized the content of
the agent's knowledge (striving for what he calls ``epistemological
adequacy'') and has de-emphasized the actual computational strategies
for inference (or what he calls ``heuristic adequacy''). Other
researchers have taken up the slack by focusing on automated
deduction, which is now an extensive research area in its own right.
Logic certainly seemed precise enough to satisfy my neat instincts,
and in fact, the whole strategy seemed entirely reasonable: Take
commonsense concepts like ``knowledge'' and ``reasoning'' and
operationalize them as precise technical notions like ``formulas'' and
``deduction.''
Formal Architectures for Intelligent Agents
My next few years were spent trying to extend and refine this picture
into an integrated formal model of a rational agent based on the
commonsense concepts of belief, desire, and intention--roughly the
current research program of the Rational Agency group at CSLI. In
the model, these propositional attitudes were to be operationalized
computationally as logical formulas interpreted semantically by the
designer and manipulated formally by the program. Formulas would be
added to and deleted from belief, desire, and intention data bases
according to processes corresponding to belief revision, inference,
planning, etc. These processes would satisfy certain principles or
constraints which would be specified rigorously and would serve as a
precise blueprint for implementing the agent. One example of such a
principle might be that deductive inferences should be sound relative
to the intended interpretation of the belief language. Another might
be that intentions should be ``rational'' relative to beliefs and
desires in some well-defined technical sense. Computational
operations on the data bases would be designed to preserve these
properties as invariants.
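In program terms, the blueprint amounts to something like the following
(a schematic sketch under the stated principles; the class and the crude
invariant checks are hypothetical illustrations, not an actual design):
   # Attitude data bases whose update operations enforce two principles
   # as invariants.  The checks are crude stand-ins: soundness is
   # approximated by "all premises are currently believed," rationality
   # of intention by "the intention serves some current desire."
   class Agent:
       def __init__(self):
           self.beliefs, self.desires, self.intentions = set(), set(), set()

       def infer(self, premises, conclusion):
           assert all(p in self.beliefs for p in premises), "unsound step"
           self.beliefs.add(conclusion)

       def adopt_intention(self, act, serves):
           assert serves in self.desires, "intention serves no desire"
           self.intentions.add(act)

   a = Agent()
   a.beliefs.add("door is open")
   a.desires.add("privacy")
   a.adopt_intention("close the door", "privacy")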
With this broad model of rational belief and action in mind, I
decided to try to apply it to the implementation of an actual system:
an experimental mobile robot. Nils Nilsson and I initiated a robot
project modeled after the earlier Shakey project and aimed at
constructing an integrated computer individual that could perceive,
reason, and act in smooth interaction with its environment. At the
practical level, work began at SRI on the construction of the robot
itself (Flakey). At the theoretical level, I began participating in
the planning and practical reasoning seminar at CSLI where
philosophers and AI planning researchers were discussing
belief-desire-intention models and their possible realizations as
computer programs.
As this project progressed, I began to have doubts about the grand
strategy of basing the implementation of AI systems on
folk-psychological notions, especially with propositional attitudes
operationalized as logical formulas in data bases. The reasons for
this were several:
Intractability of deduction. There seemed to be severe
difficulties in adopting automated deduction as a processing strategy,
especially for real-time systems such as the one we were trying to build.
The content of our commonsense knowledge is quite rich by any account.
The inference problem for formal systems that are adequate to express
this content ordinarily exhibits a high degree of computational
complexity. Heuristic strategies are unsatisfying because it is
difficult to formulate generalizations about when they would work and
when they wouldn't, and engineering practice has taught us not to be
overly optimistic in this regard.
Inapplicability to special-purpose representations. It was
widely assumed, even by the advocates of logical representation
languages, that some parts of a cognitive agent made use of
special-purpose, nonlogical representations. For instance, no one
seriously proposed deduction as the operative mechanism at the lowest
levels of perception, e.g., for stereo matching. I was puzzled by the
question of how semantics could be assigned to these special
representations and why there was discontinuity in the analysis.
Could representations become more logic-like by degrees?
Lack of concrete guidance. If the component elements of the
abstract specification, e.g., propositions, are not associated
with particular data objects, e.g., formulas, then the attribution of
propositional attitudes seems to be a global constraint that gives
the implementer little guidance in the detailed work of building the
program by parts.
The arbitrariness of interpretation. In most current AI systems,
even those designed by ``neats'' and based on logical representations,
the attribution of content to the program's states depends crucially
on the intuitions of the programmer and is not an objective property
of the program.
Quest for a Formal Framework, Part II: Situated Automata
These feelings of frustration in trying to relate mechanism to content
had been growing over time, but they were catalyzed into a new
research direction by a single experience at CSLI. It was a remark
made by Michael Bratman during the planning and practical reasoning
seminar during the first year of CSLI. I was in a computational frame
mind, having just given a talk on how we could describe the
computational state of an agent as encoding beliefs, desires, and
intentions over time. Michael was trying to explain how desires
differed from intentions. ``Intentions control behavior, whereas
desires only influence behavior.'' I recall that utterance as a kind
of conversion experience. My reaction was: What could that possibly
mean computationally? Outputs of a machine are not ``influenced'' by
states of the machine; they functionally depend on them.
Furthermore, at the most mechanistic, automata-theoretic level, the
states of the machine similarly depend on the inputs, i.e., are a
function of them (and the initial state of the machine). And these
inputs depend on states of the environment. So, indirectly, the
states of the machine depend on the state of the environment. Jon Barwise
and Perry had described how mental states might be viewed as
``classifying'' external situations, and Fred Dretske had developed a
theory of information based on correlation. It seemed reasonable to
define the information encoded in the state of a machine by
considering what states of the environment it was correlated with. A
technical reconstruction of this notion turned out to be virtually
identical to a class of models of knowledge that was being
investigated independently by Joe Halpern and other theoretical
computer scientists studying distributed computation.
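The core intuition can be illustrated in a few lines (a toy example, not
the situated-automata formalism itself): when a machine's state is a
function of inputs that track the environment, the state is correlated
with, and so can be said to carry information about, environmental
conditions.
   # The machine counts pulses from a rain sensor.  Its state is a
   # function of its input history, and the inputs track the environment,
   # so the state carries the information "it has rained state-many times."
   def step(state, pulse):
       return state + pulse

   environment = [1, 0, 1, 1]      # rain on days 1, 3, and 4
   state = 0
   for pulse in environment:
       state = step(state, pulse)
   print(state)                    # 3 -- correlated with the rainy days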
The technical consequences of this idea are still being explored, but
one technologically significant one is the elimination of the reliance
on symbolic formulas and deduction as the only way of bridging the gap
between meaning and mechanism. The situated-automata model provides a
mathematical formulation of the relationship between the behavioral
mechanisms of physical systems (such as organisms and computers) and
the propositional information content that can be ascribed to the
states of such systems.
On Balancing Meaning and Mechanism in Theoretical AI
In a sense, the tension between content and mechanism has been with AI
since the beginning and is illustrated by the differing views of two
of AI's founders, John McCarthy and Marvin Minsky, on the question of
the essential nature of knowledge in AI systems. McCarthy stresses
the propositional content of an agent's knowledge, while Minsky
stresses the active, dynamic nature of machines that cope, especially
machines constructed out of simpler machines that cope with more
specialized classes of situations. Briefly, McCarthy views knowledge
as a collection of facts about situations, while Minsky holds that
knowledge is a collection of mechanisms for handling situations. This
is not the neat-scruffy distinction in disguise, as there are neat
*and* scruffy theories in both camps. One of the aims of
situated-automata theory is to bridge the gap by defining a rigorous
informational semantics for arbitrary machines embedded in
environments.
Part of the hidden agenda behind this research is to make it
legitimate for AI theorists concerned with issues of content to again
turn their attention to specific issues of mechanisms while
maintaining semantic rigor. As a computer scientist (and
technologist) I am concerned about the tendency of theoretical
AI researchers to become absorbed in pure logic and philosophy. Some
of the topics which are currently occupying the attention of a large
part of the theoretical AI community are self-reference, non-monotonic
reasoning, formal models of time, causality and action, and the
formalization of other commonsense concepts. Almost any of these
could be studied equally well (perhaps better) by a professional
philosopher or logician. Good work on these topics contributes to our
general understanding of the *content* of reasoning, and is
certainly necessary, but adds little to our understanding of the
mechanisms, except in the trivial sense that any formal theory is
grist for the automated deduction mill.
It concerns me that the most philosophically and logically
sophisticated of AI researchers have let their research agendas become
dominated by non-computational issues. I do not mean by this that
theoretical AI researchers program less than they used to. For all I
know, they program more now than ever before. But an AI researcher is
more than a philosopher with a Lisp machine. The special role of AI
is both to develop the technology of intelligent machines and to
discover instances where the fundamental computational nature of a
mechanism illuminates some otherwise unexplained phenomenon or greatly
simplifies the explanation.
Of course, in retrospect, with the intermingling of computational and
philosophical concerns, it was inevitable that some AI researchers
should take up purely philosophical questions. Undoubtedly there are
instances of drift in the other direction as well. What is clear is
that the tension between meaning and computational mechanism will be
accommodated in a more sophisticated technical way because of the
existence of CSLI and other similar institutions.
-----------
end of part 1 of 7
-------
∂24-Jun-86 1748 JAMIE@SU-CSLI.ARPA CSLI Monthly, Vol. 1, No. 4, part 2
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 24 Jun 86 17:48:43 PDT
Date: Tue 24 Jun 86 15:30:27-PDT
From: Jamie Marks <JAMIE@SU-CSLI.ARPA>
Subject: CSLI Monthly, Vol. 1, No. 4, part 2
To: newsreaders@SU-CSLI.ARPA
------------------
PROJECT REPORTS
RATIONAL AGENCY (RatAg)
Michael Bratman and Amy Lansky
Project Participants: Michael Bratman (Project Leader), Philip
Cohen, Lewis Creary, Todd Davies, Charles
Dresser, Michael Georgeff, Pat Hayes,
David Israel, Kurt Konolige, Amy Lansky,
Robert Moore, Nils Nilsson, John Perry,
Martha Pollack
The Rational Agency (RatAg) project has focused on the question:
``What should an architecture or model of a rational agent look
like?'' Philosophers of practical reason and researchers in
artificial intelligence have both been concerned with rational
behavior, the former in describing its general nature in humans, the
latter in building machines that embody it. This working group has
brought together researchers from these two disciplines. Over the
past year we have met biweekly to discuss the components of rational
agency and their interaction. We have found that, while the
philosophical and AI approaches have attacked the problem of
understanding rational agency from quite different perspectives, they
are actually now at a point of convergence. In this report we will
discuss our findings in this regard and present some of our research
results to date.
The Components of Rational Agency
In its basic form, rational behavior is the production of actions
that further the goals of an agent, based on that agent's perception
of the world. Consider the problem of rational behavior facing
someone who, while on her way to her car to drive to a concert,
notices her neighbor unsuccessfully attempting to start his car. The
former agent---let us call her Smith---must form a coherent picture of
the world based upon her beliefs and her perceptions, so that she
comes to believe that her neighbor may need her help. She then needs
to consider her various desires---her wish to test the jumper cables
she recently bought, her wish to be a helpful person and good
neighbor, and her wish to get to the concert on time---to determine
what action she should take. She may decide that, if she stops to
assist her neighbor, she will, by so doing, satisfy the first desire
and contribute to satisfying the second, but will cause the third to
be unsatisfiable; and she may decide that, if she instead continues on
her way to the concert, she will satisfy the third desire but fail to
satisfy the first two. If she thinks that being helpful is the most
important of her relevant desires, she will, if she is rational, form
a further desire, namely, to stop and help her neighbor; this desire
will then result in action.
This story suggests a general framework for describing rational
behavior: a model with three main components---perception, the
psychological attitudes of belief and desire, and action. It also
suggests the kinds of functional dependencies that relate these
components in a rational agent: perception, for example, gives rise to
beliefs; beliefs and desires give rise to further desires; specific
desires give rise to actions. Let us call this the belief-desire (BD)
architecture. The assumption of this sort of commonsense
psychological architecture allows us to account for our explanations
of everyday behavior: ``Smith wanted to be helpful, so she stopped to
give her neighbor a jump start, even though that made her late for the
concert.'' Yet, as specified so far, the architecture is vague---too
vague for AI researchers to incorporate into robots that will behave
as Smith does.
The goal of our research has been to develop a model of rational
agency, providing a detailed and systematic account of the functional
dependencies among perception, psychological attitudes, and rational
action. We have been working on the development of an account that is
sufficient both to drive a model of rational agency---to furnish a
specification, if you will, for an autonomous, rational robot---and to
facilitate a critical look at existing AI systems for planning and
executing actions.
One of the primary questions we have asked is, what exactly are the
primitive components of rationality? From a philosophical
perspective, this is equivalent to asking what the set of primitive
mental states must be to describe human rationality; from an AI
perspective, this is equivalent to asking what the set of primitive
mental operators must be to build an artificial agent who behaves
rationally.
We have agreed that the philosopher's traditional two-parameter model,
containing just beliefs and desires, is insufficient. In particular,
we have begun to see the need for the addition of two more
parameters---intentions and plans (Bratman, 1986, in prep.; Pollack,
1986a, 1986b, 1986c).
One of the most compelling reasons for the addition of these two
components is the fact that agents are `resource bounded'---they
cannot do arbitrarily large computations in the time available. In the next
section we elaborate this argument, as well as cite other reasons for
the addition of intentions and plans to a model of rational agency.
We also discuss our research into the interactions between intentions,
plans, beliefs, and desires in a theory of rationality.
An Architecture of Rational Action
In our exploration of the components of rationality and how they
interconnect, we have begun to see a striking convergence of the
philosophical and AI approaches to this problem.
In the philosophy of practical reason, there is a long tradition of
accepting something like a BD architecture. Within this tradition,
the commonsense notion of intention is seen as directly reducible to
beliefs and desires. However, over the last fifteen years or so,
several philosophers, including Michael Bratman, have begun to argue
that other components must be added to cognitive models of rational
agents (Bratman, 1986, in prep.). Two phenomena have led to these
claims.
The first of these phenomena is resource boundedness. If agents
were not resource bounded, they might, at each instant of time, weigh
all their beliefs and desires in order to determine which action
currently available to them would do the most to advance their goals.
In reality, however, agents do not have an arbitrarily long time to
decide how to act. The world changes around them while they consider,
and they cannot continually re-evaluate the consequences of their
beliefs and desires if they are to keep pace with those changes. Even
assuming that the agent has large computational resources at her
disposal, weighing the situation at hand every few moments would
render her immobile in a rapidly changing world.
A further demand upon rational agents stems from the need to
coordinate their own activities, as well as to coordinate their
activities with those of other agents. Consider again Smith, who
manages to give a lecture, finish writing an article, pick up her
clothes at the cleaner's, and then set out for the concert. In
addition to coordinating her own activities to achieve a complex set
of goals, she needs also to coordinate her activities with those of
others. She may, for example, have arranged to meet her friend Jones
at the library after the concert. Smith counts on Jones's meeting
her; likewise Jones counts on Smith's meeting him. Their expectations
will normally be based on something stronger than simply their beliefs
about each other's desires. For example, it is possible that since
the time Smith last communicated with Jones, something has arisen that
is more desirable to Jones than his meeting Smith. But normally Smith
does not need to stop and consider this possibility.
To meet the challenges presented by our being resource bounded and
our having a need for both social and intrapersonal coordination, our
group has hypothesized that humans are essentially planning creatures,
i.e., that our cognitive architecture includes plans as well as
beliefs and desires. Plans represent precomputed decisions to act in
certain ways. Once Smith has formed a plan to stop on her way to the
concert and assist her neighbor with his car, she does not need
to weigh the situation at hand in an unfocused way. Only under
unusual circumstances---for example, noticing that a tow truck is
approaching---does she need to reconsider her plan. The very fact of
having a plan carries with it a certain commitment. Thus, Smith and
Jones can achieve their common goal of meeting at the library after
the concert because each has a plan to do so, and believes that the
other has one also.
So, agents must coordinate their many goal-directed activities, and
must do so in ways that are compatible with their limited capacities
for deliberation and information processing. Together these demands
suggest that agents form plans. But a different type of limitation
that affects agents also influences the nature of their plans. Agents
are neither prescient nor omniscient. The world may change around
them in ways they are not in a position to anticipate; hence highly
detailed plans about the far future will often be of little use and
not worth bothering about.
As a consequence, plans will typically be `partial'. This
partiality reveals itself in at least two different ways. First of
all, one's plan for the future frequently will account for some
periods of time and not for others. A second type of partiality
results from the hierarchical nature of plans. For example, we often
decide first on the relatively general ends of a plan, leaving open to
deliberation more specific questions about means and preliminary
steps. If we view plans as being composed of smaller
elements---`intentions'---we see that it is characteristic for agents
to reason from prior intentions to further ones. In such reasoning an
agent fills in her partial plans in ways required for them
successfully to guide her conduct.
Plans, as we conceive of them, are also subject to two kinds of
constraints: `consistency constraints' and the requirements of
`means-ends coherence'. An agent's plans need to be consistent both
internally and with her beliefs. Roughly speaking, it should be
possible for an agent's plans, taken together, to be executed
successfully in a world in which her beliefs are true. As a result of
this consistency requirement, prior plans not under reconsideration
can be seen to `constrain' subsequent plans, providing what might be
termed a `filter of admissibility' on options. (Cohen and
Hector Levesque have recently attempted to formalize this idea using a
model-theoretic approach.) Second, though partial, plans need to be
filled in to a certain extent, as time goes by, with subplans
concerning means, preliminary steps, and relatively specific courses
of action. These subplans must be at least as extensive as the agent
believes necessary to execute the plan successfully. Otherwise the
plan will suffer from means-end incoherence.
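The filtering role of prior plans can be pictured as follows (an
illustrative toy only; the names and the conflict table are invented,
and Cohen and Levesque's formalization is model-theoretic rather than
procedural):
   # Options are screened against prior intentions before any weighing
   # of desires: an option inconsistent with a standing intention is
   # simply inadmissible.  The conflict relation is stipulated by hand.
   CONFLICTS = {frozenset({"work_late", "attend_concert"})}

   def admissible(option, intentions):
       return not any(frozenset({option, i}) in CONFLICTS
                      for i in intentions)

   intentions = {"attend_concert"}
   options = ["work_late", "leave_at_five"]
   print([o for o in options if admissible(o, intentions)])
   # ['leave_at_five']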
In sum, there emerges from recent philosophical work a picture of
the process of `intention formation'. Agents are seen as being
motivated to form intentions to satisfy the requirements of means-end
coherence; they are also seen as being constrained by consistency
requirements to form only those intentions that can pass through the
filter of admissibility established by their prior intentions. But
many details of this picture remain to be worked out. In particular,
philosophers have by and large not addressed the details of the
means-end reasoning process, or what we might call `intention
realization': they have not specified how an agent can decide what
further intentions can count as means to, or preliminary steps for,
his prior intentions. But there is a large body of work within AI
that can be seen as dealing with just this question.
One of the ways the RatAg group has approached the problem of
understanding rational agency has been to actually examine existing AI
planning systems---an approach we have called ``robot psychology''
(Konolige, 1985b). AI researchers have taken planning seriously
almost since the field's inception. A number of techniques have been
developed for representing the effects of actions, as well as for
computing an action or actions that will achieve some goal. There are
even approaches to planning that have the capability to deal with
interactions among parts of a plan or among plans (Georgeff, 1985,
1986; Lansky, 1985a, 1985b, in prep.).
Yet there is a real difference between the plans constructed by
most existing AI systems and the sort of plans we discussed earlier.
The plans built by AI systems have often been hierarchical, but they
have not been partial. Instead, most AI planning systems expand plans
to a given level of detail as defined by the ``primitive'' operators
of the system. The level of detail in the plans constructed is
uniform, no matter how far into the future that plan extends.
In practice, however, forming complete plans prior to execution is
usually infeasible. Neither automatic planning systems nor their
designers are prescient or omniscient. Consequently they are unable
to anticipate all the quirks of a real-world environment. Inevitably,
the original capabilities of practical planning systems have had to be
augmented to allow for the monitoring of plan execution and for
replanning.
Although it is certainly important to be able to monitor one's
actions and to replan when the environment turns out to be different
than what one expected, too much replanning can be quite costly. As
we have seen, it is simply ineffective to plan a long way ahead to a
uniform level of detail; it is usually wiser to form a partial plan,
waiting to see what the world is like before expanding further. This
is one desirable feature of rational planners that traditional AI
planning systems do not exhibit. A second desirable feature is the
ability to respond to newly perceived facts that may entirely change
one's task priorities. Traditional AI planning systems, once they
adopt a plan, are unable to change their goal---the most they can do,
if they have some sort of replanning capabilities, is to replan to
achieve the same goal they originally set out to achieve.
In response to these issues there have been recent attempts at
developing mixed planning/execution systems, sometimes called
`reactive planners'. These systems construct plans that are
`partial', in exactly the sense we described earlier. When plans are
initially formed by a reactive planner, they are only expanded to a
level of detail that seems reasonable, given the information available
at the time. Plan expansion is dynamic: details are added during the
execution process itself as more information becomes available. Such
information can also result in the system abandoning its attempts to
achieve an existing goal.
Once planning is allowed to be intermixed with execution, however,
the problem of resource boundedness again rears its head. These new
systems must have some way of ensuring that some execution actually
occurs and that they do not get stuck in continual attempts to compute
the best option, without ever performing it (or even beginning to).
They can best do this, we claim, by incorporating a view of plans akin
to the one we outlined earlier, in which prior plans both pose
additional planning problems and constrain acceptable solutions to
them.
One example of a reactive planning system is the Procedural
Reasoning System (PRS) being developed at SRI International by
Georgeff and Lansky (Georgeff and Lansky, 1986a, 1986b; Georgeff,
Lansky, Bessiere, 1985; Georgeff, Lansky, Schoppers, 1986). It is
instructive to consider briefly how PRS operates. PRS, like any
planning system, begins by adopting a goal G. However, being a
reactive planner, PRS does not then build a complete plan for
achieving G. Rather, G is associated with a precomputed method for
achieving it, which may be either a so-called basic action or a
sequence of subgoals G1, ..., Gn. In the former case, PRS can simply
execute the action. It is when the method associated with G is of the
latter type that the mixed planning/execution nature of PRS becomes
evident. PRS will begin, in this case, by retrieving the methods
associated with G1, selecting one, and executing it. This selection
process can use reflective reasoning to choose the best possible
method for achieving G1. Of course, since the method selected may
itself consist of a sequence of subgoals, the process of method
selection and execution may have to be repeated several times before
G1 is achieved. Only then will the process be repeated for G2. The
low-level actions used to realize a future subgoal, for instance G5,
do not need to be determined prematurely by PRS. This reflects well
the observation made earlier that highly detailed plans about the far
future are not in general worthwhile.
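The control cycle just described can be rendered as a small sketch.
The rendering below is ours, not PRS's actual interface: the method
library and the `select' and `observe' functions are hypothetical
stand-ins for PRS's method database, its reflective method selection,
and its perception.

  from dataclasses import dataclass
  from typing import Callable, Optional, Tuple

  @dataclass
  class Method:
      # A precomputed method for a goal: either a basic action or a
      # sequence of subgoals G1, ..., Gn.
      action: Optional[Callable[[], None]] = None
      subgoals: Tuple[str, ...] = ()

  def achieve(goal, library, select, observe):
      # Mixed planning/execution: expand one subgoal at a time, so the
      # low-level realization of a later subgoal (say, G5) remains
      # undetermined until execution actually reaches it.
      method = select(library[goal], observe())  # reflective selection
      if method.action is not None:
          method.action()                        # basic action: execute it
      else:
          for subgoal in method.subgoals:        # achieve G1 fully, then G2
              achieve(subgoal, library, select, observe)

  # Trivial demo: always pick the first applicable method.
  library = {"make-tea":   [Method(subgoals=("boil-water", "steep"))],
             "boil-water": [Method(action=lambda: print("boiling"))],
             "steep":      [Method(action=lambda: print("steeping"))]}
  achieve("make-tea", library, lambda methods, view: methods[0],
          lambda: None)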
Reflective reasoning is thus used by PRS to fill in the details of
a partial plan by determining the best method for achieving a
previously selected goal. It is also used to allow PRS to change its
goals when the situation warrants it. Whenever PRS is reflecting on
which procedure best meets its needs, it can also decide to abandon
its current plan and do something else instead. It is therefore able
to modify its plans of action rapidly on the basis of what it
currently perceives.
PRS avoids both the problems of traditional AI planning systems
described above. Of particular interest is its use of reflective
reasoning, which can be seen as embodying a mechanism for the
plan-formation process. Reflective plans can be used to encode the
principles of rational plan formation---principles whose outlines have
been developed in recent philosophical work, and whose further
elaboration we are pursuing.
In particular, we are currently working on a formal description of
PRS strictly in terms of axioms concerning beliefs, desires, and
intentions, as well as their interactions with one another and with
perceptual input and rational action (Georgeff and Lansky, in prep.).
In this analysis, the various components of the existing system are
associated with components of a cognitive model. Axioms that describe
the various active components in the system correspond to principles
of rationality. We intend to continue development of this
axiomatization and to use it as a guide for extending and
restructuring the system. In this way, we hope to expedite the
construction of a rational artificial agent.
Finally, we conclude by looking briefly at our work to date on the
interaction between intentions and beliefs about the future. Both
Konolige and Israel have done substantial work on the combinatorial
properties of the primitive components of rationality (Konolige,
1985a; Israel, in prep. a, in prep. b). Bratman has argued that
agents do not necessarily intend the expected side effects of their
intentions. Cohen and Levesque (1985, in prep. a, in prep. b) have
provided a formal analysis of a concept approximating intention that
shows how an agent's `persistence', modeled as a set of constraints on
intention revision, blocks side effects from being intended. Even a
fanatical agent, who keeps trying to achieve his persistent goal until
he believes it to be satisfied or until he believes it to be
impossible, will not keep trying to achieve what is merely an expected
side effect.
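As a minimal sketch, the persistence condition can be written as a
loop whose only exits are the two belief conditions; the predicates
below are hypothetical stand-ins, not Cohen and Levesque's formal
definitions.

  def pursue(goal, believes_satisfied, believes_impossible, attempt):
      # Fanatical persistence: keep trying until the goal is believed
      # satisfied or believed impossible.
      while not (believes_satisfied(goal) or believes_impossible(goal)):
          attempt(goal)
      # Merely expected side effects are never adopted as goals, so
      # they never drive such a loop; they are not thereby intended.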
The ability to state conditions under which an agent can drop an
intention points to an analysis of an agent's interlocking commitments
within the agent itself as well as with other agents. Intention
revision can thus be triggered off an agent's dropping intentions, his
believing certain applicability conditions are false, or his believing
that some other agent has dropped (or even adopted) a given intention.
Cohen and Levesque argue that communicative acts such as requesting
and promising should be analyzed in terms of such interlocking
intentions.
References
Bratman, M. 1986. Intention and Commitment. Invited Address, APA
Pacific Division Meetings.
Bratman, M. In preparation. Intention, Plans and Practical Reason.
Cambridge, Mass.: Harvard University Press.
Cohen, P. and Levesque, H. 1985. Speech Acts and Rationality. In
Proceedings of the 23rd Annual Meeting of the ACL. Also in M.
Genesereth and M. Ginsberg (Eds.), Proceedings of the Distributed
Artificial Intelligence Workshop.
Cohen, P. and Levesque, H. In preparation, a. Communication as Rational
Interaction.
Cohen, P. and Levesque, H. In preparation, b. Persistence, Intention
and Commitment in Rational Interaction.
Georgeff, M. 1985. A Theory of Process. In Proceedings of the 1985
Distributed Artificial Intelligence Workshop. Sea Ranch, Calif.
Georgeff, M., Lansky, A., and Bessiere, P. 1985. A Procedural Logic.
In Proceedings of IJCAI 9, 516--523.
Georgeff, M. 1986. The Representation of Events in Multiagent Domains.
In Proceedings of the National Conference on Artificial Intelligence.
AAAI.
Georgeff, M. and Lansky, A. 1986a. Procedural Knowledge. To appear in
Proceedings of the IEEE, Special Issue on Knowledge Representation.
Georgeff, M. and Lansky, A. 1986b. System for Reasoning in Dynamic
Domains: Fault Diagnosis on the Space Shuttle. SRI Tech. Note 375.
Georgeff, M., Lansky, A., and Schoppers, M. 1986. Reasoning and
Planning in Dynamic Domains: An Experiment with a Robot. SRI Tech.
Rep.
Georgeff, M. and Lansky, A. In preparation. A Cognitive Representation
of the Procedural Reasoning System. SRI Tech. Rep.
Israel, D. In preparation, a. Intentional Realism Naturalized.
Israel, D. In preparation, b. On the Paradox of the Surprise
Examination: Problems and Beliefs about One's Own Future. SRI Tech.
Rep.
Konolige, K. 1985a. Belief and Incompleteness. In J. Hobbs and R.
Moore (Eds.), Formal Theories of the Commonsense World. Norwood, N.
J.: Ablex, 359--404.
Konolige, K. 1985b. Experimental Robot Psychology. SRI Tech. Note 363.
Lansky, A. 1985a. A `Behavioral' Approach to Multiagent Domains.
In Proceedings of the 1985 Distributed Artificial Intelligence Workshop.
Sea Ranch, Calif., 159--183.
Lansky, A. 1985b. Behavioral Specification and Planning for Multiagent
Domains. SRI Tech. Note 360.
Lansky, A. In preparation. A Model of Parallel Activity Based on
Events, Structure, Causality, and Time.
Pollack, M. 1986a. Inferring Domain Plans in Question-Answering.
Doctoral dissertation, University of Pennsylvania.
Pollack, M. 1986b. A Model of Plan Inference that Distinguishes
Between the Beliefs of Actors and Observers. To appear in Proceedings
of the 24th Annual Meeting of the ACL.
Pollack, M. 1986c. Some Requirements for a Model of the Plan Inference
Process in Conversation. In R. Reilly (Ed.), Communication Failure in
Dialogue. Amsterdam: North-Holland.
-----------
end of part 2 of 7
-------
∂24-Jun-86 1904 JAMIE@SU-CSLI.ARPA CSLI Monthly, Vol 1., No. 4, part 3
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 24 Jun 86 19:04:24 PDT
Date: Tue 24 Jun 86 15:31:43-PDT
From: Jamie Marks <JAMIE@SU-CSLI.ARPA>
Subject: CSLI Monthly, Vol 1., No. 4, part 3
To: newsreaders@SU-CSLI.ARPA
EMBEDDED COMPUTATION (EC)
Brian C. Smith
Project Participants: Curtis Abbott, Carol Cleland, Michael Dixon,
Kenneth Olson, Brian C. Smith (Project Leader),
Tayloe Stansbury
The Embedded Computation (EC) project has two long-term goals: to
develop a theory of computation adequate to the phenomena of situated
agents, and to design specific computational architectures consonant
with that theory. Progress has been made on both fronts. In both
cases we have moved from general intuitions and overall requirements
to a specific conceptual structure in terms of which we are now
working out details.
Theory
In the original proposal for CSLI research, we claimed that
``computation was fundamentally linguistic.'' We have now unpacked
this insight into a set of more specific interlocking claims, in part
because of a richer understanding of the term ``linguistic.''
Initially, we used the phrase ``linguistic'' in a broad sense, as
is common in artificial intelligence, cognitive science, and computer
science. We had been impressed with the large number of semantical or
intentional relations (situations where one structure stands for or
signifies another) that arise in even the simplest computational
systems. Examples include the many relations among internal data
structures, visual representations, programs, specifications,
implementations, etc. Given this set of phenomena, it was natural to
view all these structures as analogous to language, for several
reasons. For one thing, this understanding was implicit in technical
jargon (``programming languages,'' ``formal symbol manipulation,''
``mentalese,'' etc.). For another, virtually all well-developed
theoretical techniques for analyzing semantic relations were developed
for purposes of language analysis (especially model theory and various
forms of denotational semantics). Finally, the idea that the internal
structures in a reasoning system should be viewed linguistically is an
explicit and popular hypothesis in AI.
It is an important fact about CSLI, however, that its research
community involves linguists and philosophers who study the
paradigmatic case of language: structures used by humans to
communicate among themselves. From this direction came a pressure to
use ``language'' in a narrower, more focused sense, with specific
properties having to do with communication, public or conventional
use, etc. At the same time, the linguistic approach to internal
states and structures of computational processes came under rather
severe scrutiny, this time from two sources. First, a
nonrepresentational approach to computation is increasingly being
espoused by theoretical computer scientists, including some of those
at CSLI (primarily Joseph Goguen and Jose Meseguer). Second, in
conjunction with others at CSLI, we came to realize that a narrowly
linguistic approach to internal structures was also inadequate in AI
and cognitive psychology.
The result of these pressures was to split our general notion of
language into two parts. On the one hand, we embraced a more general
notion of representation, which included language as a special case,
but also encompassed models, images, simulations, and a wide variety
of other ``semantical'' phenomena. We realized that an analysis of
the full complex of semantical relations in computer systems would
require the prior development of such a theory of representation,
which would include analyses of correspondence, of modeling, and
various related subjects. On the other hand, we also recognized that
these theories would not be specific to computation. As a result, a
project specifically dedicated to those ends (the Representation and
Reasoning project) was split off from the Embedded Computation group,
and was described in an earlier issue of the Monthly (Vol. 1, No. 2).
Given this development, the goal of the Embedded Computation
project is to employ these more general representational techniques in
analyzing computational systems as a whole. The first attempt to
analyze systems in these terms is presented in Brian Smith's paper on
correspondence (1986b), which documents the inadequacy of traditional
semantical techniques (particularly those of model theory), and
proposes a more fine-grained but flexible theory of general
correspondence. The general contextual dependence of computation and
reasoning is also analyzed in Smith's (1986c) paper
that attempts to derive a variety of kinds of computational
self-reference as solutions to the problem posed by an agent's
attempting to extricate itself from its own circumstantial dependence.
A larger project is reported in Smith's book (forthcoming), which
argues that it is impossible to maintain the overwhelmingly popular
view that computation is ``formal,'' no matter what reading of that
term one chooses. In terms of the present analysis, this conclusion
can be interpreted in a variety of ways. First, if ``formal'' is
taken to mean ``independent of context,'' as Jon Barwise has suggested
(in a discussion of logical inference; Barwise, in press), then many
current systems are patently not formal. Smith shows that they also
violate formality if it is taken to mean ``operates independently of
semantic interpretation'' (except in such a weak sense that every
possible physical object, including people, *must* be formal). In this
and several more cases, the point is once again that semantical
techniques originally developed for formal languages are inadequate to
the computational case.
As well as challenging received views of formality, Smith's book
also challenges all three reigning theories of computation (formal
symbol manipulation, recursive function theory, and automata theory).
In contrast, it sketches an alternative theory that rests explicitly
on a representational foundation, and deals directly with physical
embodiment. In this concern with the causal foundation of
computation, and in the rejection of a narrowly ``linguistic''
notion of internal representation, Smith's project is similar to
Stanley Rosenschein and Leslie Kaelbling's work in the Situated
Automata project. The major initial difference between the two has to
do with the stance towards representation: Rosenschein and Kaelbling
explicitly attempt to set representation aside; Smith's approach is to
revamp the very notion of representation in such a way as to make a
representational theory of computation tenable.
As well as paving the way towards new theories of computation, and
new semantical techniques for analyzing current systems and practices,
there is another benefit to viewing language as just one particular
instance of representation more generally: structures that really are
languages can be treated as such, in all their specificity, rather
than being incorporated into a vague, more general notion. In the
computational realm, this naturally leads us to distinguish:
(a) The languages we use to specify, engender, and interact with
computer systems
(b) The structure and behavior of computational processes
themselves
Thus a program for an automatic system to land planes at the San
Francisco Airport would be a case of the former; the system itself an
instance of the latter. Both entities, being meaningful,
information-bearing, significant artifacts, require semantical
analysis. Thus we might ask the following sorts of questions about
the former: exactly what signals does it lead the system to send out;
what sorts of scoping mechanisms and variable binding does it employ;
what are the semantics of its if-then-else construct? About the
latter we might ask: what do those signals mean; how many planes can
it track before becoming overloaded; what plane out there in the sky
does some particular data structure actually refer to; does it know
about the hurricane over Oakland?
In traditional computer science these questions would be studied
together. Our approach, however, enables us to treat them separately,
which clarifies a number of issues. Consider, for example, the
important role of context in determining the significance of any
representational structure. The point is that the kinds of context
relevant to the specification-relation are different from the kinds of
context relevant to the process-world relation. For example, the
meaning of the program fragment ``PRE-CLEARED(FLIGHT-PATH[Xj])'' may
depend on definitions in other modules in the whole specification
package. On the other hand, what particular airplane was signified by
a given data structure referred to on the morning of July 27th, 1984,
may depend not on facts about the linguistic context, but on facts
about air traffic in the Bay Area on that date. Similarly, the
mechanisms in which these contextual facts play their determining
roles are clearly of radically different kinds.
The progress we have made in structuring the enterprise, and
identifying different semantical contributions, will greatly help in
our development of a theory of embedded computation. In addition, the
Embedded Computation group will continue to work closely with the
Representation and Reasoning group on specific semantical techniques.
Finally, we also retain a commitment to apply the results of our
analysis in the wider social and intellectual sphere. Two papers of
this sort have been prepared. The first (Smith, 1985) analyzes the
notion of computational ``correctness,'' showing how misleading uses of
this term derive from exactly the sorts of semantical confusion we
have been clarifying---in this case from a combination of an
uncritical use of model-theoretic techniques and a confusion of the
program-process and process-world relations. The second (Smith,
1986a) undertakes an analysis of the very notion of a ``technical
problem,'' arguing that our emerging understanding of situated agents,
representation, and computation challenges the widespread view that
questions about computation divide neatly into ``technical'' and
``social'' categories.
System Design
The second part of the Embedded Computation project focuses on
system design. Two specific systems have been explored in the past
year: a ``Situated Inference Engine,'' (SIE) an architecture, being
developed in collaboration with the Situation Theory and Situation
Semantics (STASS) project, that is designed to manifest a theory of
situated inference; and the ``Membrane'' language, a study in
designing a modern type-based computer language that deals explicitly
with the interacting semantic demands of different kinds of linguistic
interactions with machines.
The Situated Inference Engine
Theories of inference based on mathematical logic are able to
sidestep considerations of various sorts of context. First, as
described in Barwise (in press), the semantic interpretation of
logical formulae is viewed as essentially independent of the context
of use. Thus one does not deal with a formula such as TO-THE-RIGHT(X)
where circumstantial facts are essential to the formula's
interpretation. Second, although proofs are often viewed as sequences
of expressions, those expressions are not treated as linguistic
discourses, in the sense of establishing linguistic contexts that can
be exploited by subsequent expressions. Thus, logical languages
typically do not have the richness of anaphoric constructs that
natural language does or even a notion of subject matter. Third,
inference is viewed as dependent on only a fixed set of premises or
axioms; there is no provision for dealing with unfolding
conversational contexts, with the addition of new or contradictory
information, with explicit requests, etc.
In contrast, human inference---especially if one takes inference
very broadly as the general process of developing semantically
motivated conclusions in appropriate circumstances based on available
information---violates all these assumptions, and as such is a much
more complex subject matter.
It is part of the long-range goal of the EC and STASS projects to
develop a theory of `situated inference' that deals directly with the
sorts of contextual dependence mentioned above, so as at least to
illuminate the more complex human case. This theory will also deal
with a richer analysis of consequence relations, based on different
kinds of involvement relations. Thus the situation of 10's being the
product of 2 and 5 may `logically' involve 10's being even, whereas
someone's talking to the director of CSLI may much more conditionally
involve that person's being in Palo Alto.
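A toy rendering of this graded notion (our construction, not the
theory's formalism) tags each constraint with the kind of involvement
it carries:

  INVOLVEMENTS = [
      # (antecedent situation,          consequent,          kind)
      ("10 is the product of 2 and 5",  "10 is even",        "logical"),
      ("x talks to the CSLI director",  "x is in Palo Alto", "conditional"),
  ]

  def involved_by(fact, at_least="conditional"):
      # Facts involved by a given fact, at or above a given strength.
      rank = {"conditional": 0, "logical": 1}
      return [c for a, c, k in INVOLVEMENTS
              if a == fact and rank[k] >= rank[at_least]]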
The Situated Inference project is an attempt to build a
computational system that is able to engage in simple forms of
situated inference. Although its design may involve such
architectural considerations as the use of parallel computation,
unification, term rewriting rules, constraint-based systems, etc., the
primary goal is to develop semantical techniques adequate to describe
situated inference. The basic model will be conversational---of a
person issuing utterances to the SIE, to which the SIE will produce
appropriate replies. These utterances may be questions, may convey
new information, or may ask the hearer to perform certain actions.
The initial subject domain will be one of schedules and calendars;
thus we imagine saying to the SIE (in an appropriate stylized
language) ``I have an appointment in an hour with Bill Miller,'' or
``Am I free for lunch on Wednesday?'' Both cases involve contextual
interpretation; the design goal is to have the system respond
appropriately to the contextually determined meaning, not merely to
the form of the query.
Though the SIE is at an early stage of development, several
important design issues have emerged. For example, there is a
tendency, in traditional system design, to assume that contextual
dependencies in the input language should be fleshed out
(``disambiguated'') in the course of internalizing queries or
assertions into a form suitable for internal processing. Thus one
might imagine that the noun ``Wednesday,'' in the example given in the
previous paragraph, would be converted to a unique internal identifier
(i.e., with the week or day of the month filled in). On the other
hand, as argued, for example, in Smith (1986c), there are good reasons
to presume that the interpretation of internal structures is itself
contextually sensitive, and that the idea of a ``canonical'' or
``contextually independent'' internal form is ultimately untenable.
For example, imagine designing a ``situated telephone assistant.''
Just because some phone numbers might need to be internally
represented with leading country codes, it does not follow that all of
them do. A far more reasonable design decision would be to assume
that numbers without an explicit representation of country should be
interpreted to be `in whatever country the system itself resides', and
then to provide the facility for the system to make the country code
explicit when that matters. This is not, of course, a radical design
idea; system programmers will recognize it as standard practice. The
point, rather, is to develop our theories of semantics and inference
to the point where they are able to comprehend and explain this
natural use of implicit context in computation and reasoning.
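A minimal sketch of that decision, assuming a hypothetical
home-country setting (this is illustrative code, not part of any
actual system):

  HOME_COUNTRY_CODE = "1"   # assumption: the system resides in the US

  def make_explicit(number):
      # Numbers without an explicit country code are interpreted as
      # being in whatever country the system itself resides; the code
      # is made explicit only when that matters.
      if number.startswith("+"):
          return number                        # already explicit
      return "+" + HOME_COUNTRY_CODE + number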
This example illustrates a very general fact about the SIE, which
distinguishes it from previous systems, including not only inference
machines but also natural language understanding systems and query
systems more generally. In particular, the nature of our theoretical
analysis of its structures and operations is quite different, and at
times much more complex than the ingredient structures themselves.
Another example is provided by our analysis of its conversations. We
distinguish four kinds of situation in terms of which to understand
linguistic utterances. In particular, as well as recognizing the
utterance situation itself, we recognize:
o The grammatical situation, containing facts of grammar
and language relevant to the utterance at hand
o The discourse situation, containing facts about references,
historical structure of the discourse, etc.
o The described situation, which is the subject situation
in the world that the utterance is about (such as my lunch
with Bill Miller)
o The background situation, containing large numbers of
constraints, background assumptions, etc.
All of these situations, their constituent facts, relations among
them, etc., play a role in determining the full semantical
significance of the utterance. Relations among utterances, such as
when a reply follows directly from a question, can also be stated in
terms of constraints on instances of this general interpretation
scheme. On the other hand, there is no reason to suppose, in general,
that these four situations need be explicitly represented within the
SIE. For example, the grammatical facts about the language might not
need to be represented explicitly if its parsing mechanism was
``hardwired'' to accept this and only this language. On the other
hand, there will clearly be some facts, such as the name of the person
one is scheduled to meet in an hour, that are likely candidates for
more explicit representation. As the design of the SIE proceeds, we
hope to develop a theoretical framework that will explain how and when
facts need explicit representation, as well as providing guidelines
for the system's moving flexibly from implicit to explicit
representation when circumstances demand (Smith, 1986c).
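One way to render this interpretation scheme as a record is sketched
below; the field names are ours, and, as just noted, an actual system
need not represent all of these situations explicitly.

  from dataclasses import dataclass, field

  @dataclass
  class Interpretation:
      # The utterance situation plus the four situations listed above.
      utterance: dict                                  # the utterance event itself
      grammatical: dict = field(default_factory=dict)  # may stay implicit if
                                                       #   parsing is hardwired
      discourse: dict = field(default_factory=dict)    # references, history
      described: dict = field(default_factory=dict)    # e.g., lunch with Bill Miller
      background: dict = field(default_factory=dict)   # standing constraints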
----------
end of part 3 of 7
-------
∂24-Jun-86 2001 JAMIE@SU-CSLI.ARPA CSLI Monthly, Vol. 1, No. 4, part 4
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 24 Jun 86 20:00:53 PDT
Date: Tue 24 Jun 86 15:32:47-PDT
From: Jamie Marks <JAMIE@SU-CSLI.ARPA>
Subject: CSLI Monthly, Vol. 1, No. 4, part 4
To: newsreaders@SU-CSLI.ARPA
Membrane
The programming of general purpose computers allows a degree of
flexibility in their use that is so far unobtainable in any other way,
and so it is not surprising that the development of languages to
support and facilitate the programming process is a major area of
computer science research. The traditional distinction between
interactive (or ``interpreted'') and batch (or ``compiled'') languages
has become increasingly blurred with the development of
sophisticated programming environments that allow the combination of
interpreted and compiled modes, employ ``run-time'' compilers, and
provide other tools for debugging, monitoring performance, etc. It is
important to note, however, that two quite different models of the
programming process underlie the traditional distinction---one in
which the programmer has a kind of conversation with the computer
aimed at clarifying and solving some problem, the other in which the
programmer translates a relatively clear statement of a problem into
codes acceptable to the computer; codes that will make it behave in a
specified way. It is increasingly being recognized that one should
not have to choose between these two models, but rather that each is
relevant to programming, corresponding to a ``role'' that any language
designed to support it must necessarily play. Call these the
`conversational' and `programmatic' roles, respectively. We believe
the conversational approach taken in the Situated Inference Engine
project can be applied to the programming process more generally,
thereby extending our understanding of how to support that role or
model. The Membrane project, in contrast, is an attempt to develop
the programmatic role in light of the same sorts of general concerns
and ideas laid out above.
Thus, we are concerned with the definition of languages adequate
for describing problems of the sort that arise in programming, and the
development of adequate semantical accounts of them. Curtis Abbott is
focusing his concern on the development of a particular language
called Membrane (Abbott, 1986a, 1986c, 1986d). An important premise
of this work is that the common concern with making a (computing)
machine behave in a particular way should be explicit rather than
being absorbed into a notion of procedure (or function) in which the
computational agent is hidden in the background. Therefore, he wants a
language whose expressions designate objects, not only the ordinary
mathematical objects---functions, sets, lists, numbers, and so on, but
also computational agents and their processes. In this language, it
should be possible to directly express relationships between machines
and processes, among structurally diverse machines that have
recognizably similar behavior, between functions and machines that
compute them, and so on. Although the discipline of being everywhere
explicit about all of these objects and relationships will be
intolerable in a practical setting, we believe the approach needs to
be integrated into the languages that are used for programming rather
than only appearing in theories about it, so that we can be explicit
about them when it is appropriate, and can decide in a principled way
what circumstances justify making certain objects and relationships
implicit.
Abbott uses a notion of ``type'' to organize the domain of abstract
objects that grounds his semantical account of Membrane. There is a
fairly lively controversy in this field about whether the existence of
low complexity decision procedures for typechecking problems should
affect the notion of type itself. In this, we come down heavily on
the side that says it should not. Indeed, we claim that types should
be definable by arbitrary predicates over previously defined types.
In the technical development of our notion of type, we show that this
provision for predicated subtypes allows us to simplify the type
description language somewhat. Specifically, in the
lambda-calculus-based type systems, there is usually one variable
binding operator for type abstraction and another for functional
abstraction, and the type abstraction operator is needed to express
dependent types, such as the type of products of a type, T, and an
object of type T. Given that Type is a type, we can express such
types in Membrane using only the standard function abstraction
operator to define predicates. Another somewhat unorthodox feature of
the work on types is the sort of model given for the system (Abbott,
1986b). This has always been a delicate issue for systems which allow
self-application as exemplified in the type of types being itself a
type. We have found that a model based on Peter Aczel's theory of
nonwellfounded sets allows for a very direct expression of the
circularities involved, without resort to the cleverness that is
needed to give models based on ordered sets.
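The predicate view of types can be illustrated with a small sketch
(our construction; Membrane's actual syntax and model are those of
Abbott's memos). Note how an apparently dependent type, such as pairs
of a type T and an object of type T, is expressed with ordinary
function abstraction once Type is itself a type:

  class Type:
      # A type is an arbitrary predicate over previously given values.
      def __init__(self, pred):
          self.pred = pred
      def __contains__(self, x):    # membership = satisfying the predicate
          return self.pred(x)

  Int  = Type(lambda x: isinstance(x, int))
  Even = Type(lambda x: x in Int and x % 2 == 0)   # a predicated subtype

  # "Type is a type": the self-applicative case the hyperset model grounds.
  TYPE = Type(lambda x: isinstance(x, Type))

  # A dependent type via ordinary abstraction: pairs (T, x) with x in T.
  DepPair = Type(lambda p: isinstance(p, tuple) and len(p) == 2
                 and p[0] in TYPE and p[1] in p[0])

  assert Int in TYPE and Even in TYPE and (Int, 4) in DepPair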
Even though issues of computability and computational complexity
are less immediate in a semantical account of Membrane than would be
the case for a programming language, the divergence between a usefully
concise formal language and one that is convenient for the standard,
compositional mechanisms of formal semantics becomes evident very
quickly. The approach taken to this problem is to translate
ordinary Membrane expressions into a more basic, unambiguous version
of the language. We have tried to explicate a variety of distinct
mechanisms which can be put together to obtain considerable expressive
flexibility without sacrificing rigor. These include a version of
type-based disambiguation of occurrences of atomic symbols (otherwise
known as ``overloading''), type inference, syntax-directed rewriting,
translation of apparently dependent types into the appropriate
predicated subtypes, etc. We have described each of these mechanisms
and explored the expressive style that results from putting them
together. While we believe the result is a reasonably convenient
language, our method is also intended to emphasize that other
mechanisms could easily be added without changing the language in a
fundamental way, and that we are not too committed to any particular
set of such mechanisms.
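As one example of these mechanisms, type-based disambiguation of an
overloaded symbol can be sketched as a translation table keyed by
argument types (illustrative only; Membrane's mechanisms are more
general than this):

  OVERLOADS = {
      ("+", int, int): lambda a, b: a + b,    # integer addition
      ("+", str, str): lambda a, b: a + b,    # string concatenation
  }

  def resolve(symbol, *args):
      # Translate an ambiguous occurrence of a symbol into the
      # unambiguous basic operation determined by its argument types.
      return OVERLOADS[(symbol, *map(type, args))](*args)

  assert resolve("+", 2, 3) == 5
  assert resolve("+", "ab", "cd") == "abcd"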
Other Projects
As well as pursuing its theoretical goals, the Embedded Computation
project tries to provide a forum in CSLI for the general exploration
and development of theories of a variety of computational subjects,
all within the general spirit of the Situated Language project. One
specific task we have taken on this past year has been the running, in
collaboration with STASS, of a weekly seminar called the Situated
Engine Company. This project was viewed in part as background for the
SIE development, but was also designed to broaden the scope of
possibilities for researchers throughout CSLI who are interested in
building computational models of information processing. The seminar
examined a wide variety of computational architectures:
object-oriented, constraint-based, and logic-based programming
languages; the Connection Machine; knowledge representation languages,
etc. It compared and contrasted the object-orientation of some of
these systems (SmallTalk, KL-ONE, etc.) with the relational
orientation of theories being developed elsewhere at CSLI, e.g.,
(Barwise, 1985; Stucky, 1986).
In addition, the group took on the task of specifying, in a single
integrated account, the full range of semantic facts relevant to a
hypothesized small robot (called ``Gullible'') capable of extremely simple
linguistic and ambulatory behavior in a gridlike world. Several quite
strikingly different solutions were proposed. As was to be expected,
different groups focused on the semantical aspects of greatest
familiarity to them: the structure of the language Gullible used,
abstract characterizations of Gullible's actions and internal states,
etc. The clearest result of the experiment---predicted in
advance---was that no single attempt was even near to being completely
successful. Among the important lessons learned were the following:
the importance of accounting directly for the semantical relations
implicit in abstract set-theoretic modeling, the lack of unanimity on
the best way to describe the internal states or structures of even a
simple computational process, the many different kinds of
circumstantial dependence that affect the meaning and behavior of a
situated agent (Smith, 1986b), etc. It was generally agreed that if
an adequate comprehensive account could be worked out in the coming
year, it would form the basis of a good text introducing what would be
involved in giving a comprehensive semantical analysis of a situated
language-using and information-processing agent.
References
Abbott, C. 1986a. A Formal Semantics for Membrane. ISL Tech. Memo, Xerox
PARC, forthcoming.
Abbott, C. 1986b. A Hyperset Model of a Polymorphic Type System. ISL
Tech. Memo, Xerox PARC, forthcoming.
Abbott, C. 1986c. Motivations for Membrane. ISL Tech. Memo, Xerox
PARC, forthcoming.
Abbott, C. 1986d. A Type System for Membrane. ISL Tech. Memo, Xerox
PARC, forthcoming.
Barwise, K. J. 1985. Notes on Situation Theory. CSLI Summer School
Course.
Barwise, K. J. In press. Information and Circumstance. Notre Dame
Journal of Formal Logic.
Smith, B. C. 1985. The Limits of Correctness. Presented at the
Symposium on Unintentional Nuclear War at the Fifth International
Conference of the International Physicians for the Prevention of
Nuclear War, Budapest. Reprinted in SIGCAS Newsletter (14)4, Dec.
1985. Also Rep. No. CSLI-85-36.
Smith, B. C. 1986a. Computer Science and Star Wars: What Counts as a
Technical Problem? Paper presented at the Sixth Canadian AI
Conference, Montreal, Canada; available from the author.
Smith, B. C. 1986b. The Correspondence Continuum. In Proceedings of
the Sixth Canadian AI Conference, Montreal, Canada. To be submitted
to Artificial Intelligence.
Smith, B. C. 1986c. Varieties of Self-Reference. In J. Halpern (Ed.),
Proceedings of the 1986 Conference on Theoretical Aspects of Reasoning
about Knowledge. Los Altos, Calif.: Morgan/Kaufmann, 19--43. Revised
version submitted to Artificial Intelligence.
Smith, B. C. Forthcoming. Is Computation Formal? Cambridge, Mass.:
Bradford Books/The MIT Press.
Stucky, S. 1986. Interpreted Syntax: Part I, The Argument. To be
submitted to Linguistics and Philosophy.
-----------
end of part 4 of 7
-------
∂24-Jun-86 2114 JAMIE@SU-CSLI.ARPA CSLI Monthly, Vol. 1, No. 4, part 5
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 24 Jun 86 21:14:01 PDT
Date: Tue 24 Jun 86 15:34:10-PDT
From:
Subject: CSLI Monthly, Vol. 1, No. 4, part 5
To: newsreaders@SU-CSLI.ARPA
ANALYSIS OF GRAPHICAL REPRESENTATION
David Levy
Project Participants: David Levy (Project Leader), Geoffrey Nunberg,
Kenneth Olson, Brian C. Smith, Tayloe Stansbury
This project is concerned with the nature of graphical
representation---with documents, in the broadest sense, viewed as
visual, information-bearing artifacts. While various disciplines
touch on aspects of this---linguistics, for example, addresses the
syntactic structure of ``text,'' generally as embodied in spoken (not
written) forms---none has taken the document as a subject matter in
its own right, nor provided the conceptual insights needed to ground
such an enterprise.
Yet this subject has come to assume a seminal role in modern
intellectual and economic life as the computer moves to replace the
pencil, the pen, the typewriter, and the printing press as our
predominant document preparation tool. Every such tool, from the
lowliest text editor on a personal computer, to the most sophisticated
layout engines used to design many of today's newspapers and
magazines, embodies an account of the nature of documents and their
preparation: each specifies the objects, properties, and relations
from which it takes documents to be composed, and provides the user
with a set of operations with which to compose documents.
Unfortunately, all such accounts are largely unprincipled, as a result
of which the tools now built, although impressive feats of
engineering, are idiosyncratic, incomplete, inflexible, difficult to
maintain, and difficult if not impossible to tailor.
During the last year two themes, embodiment and representation,
have assumed some importance in the development of our ideas. It has
been suggested at CSLI, for example, that what are taken to be
constraints on mind are actually constraints imposed by embodiment.
It is argued that theories must properly acknowledge the priority of
physical existence and embodiment over any abstractions derived from
it. In our case, this position manifests itself as a commitment to
the primacy of the document as a physical, information-bearing
(information-embodying) artifact. Abstractions over such physical
entities (such as the notion of ``text,'' about which more below) and
representations of them play a secondary, derivative role.
It has also been suggested that the concept of representation may
play an important role in mediating the tension between ``(a) the
ubiquity of information, and (b) the specificity of language.'' As it
turns out, the domain of document preparation is permeated by
representational issues. Documents are, of course, representational
artifacts: the structure of marks, their presence and absence,
represent states of affairs in some domain of discourse. If we are to
come to ``understand'' documents, we must, for example, develop an
account of the alphabet as a system of graphical representation. (One
such attempt can be found in Nelson Goodman's Languages of Art.)
Representation issues also permeate the use of the computer as a
document preparation tool. Much of the power inherent in such tools
derives from the fact that we do not create documents directly, but
rather create representations from which documents can be realized.
As just noted, we have taken as the starting point of our
theoretical endeavor the primacy of actual documents---those physical
things that each of us can read or view. We are working toward a
definition of ``document'' that is broad enough to include books,
papers, and marked CRT screens, but narrow enough to exclude, say,
speech and dance. One of the interesting questions is whether the
concept of activity can be introduced into the definition without
broadening it to the point of vacuity. For the time being we define
documents as some subset (as yet undetermined) of public, visual,
physically-embodied, representing artifacts.
As a physical artifact, a document is characterizable in terms of
its objects, properties and relations (its aspects). Different facets
of documents can be identified by focusing on (abstracting away) only
certain subsets of the full set of aspects. We have identified and
have been analyzing two such clusters and the relationships between
them. These two clusters, which comprise two relatively decomposable
facets of documents, we call the figural and the textual.
The figural facet of a document refers to the purely visual
objects, properties, and relations of which it is composed. We have
been exploring the figural facet, identifying such notions as figure,
ground, surface, and region. This is, roughly, the analogue of the
distinction between phonetics and phonology in linguistics: we have
been developing a visual phonetics to serve as the basis for various
visual phonologies. Such an enterprise has not been of interest to
traditional linguistics because of its concern for the spoken language
as primary and its assumption that written forms are ``direct
transcriptions'' of the spoken.
The textual facet of a document refers to those objects,
properties, and relations that are encoded via the alphabet, plus
punctuation and spacing. Textual aspects include character and word
identity and boundaries, and ``text categories'' such as sentence and
paragraph. Explicitly marked categories like the paragraph, sentence,
and parenthetical either do not exist in the spoken language, or exist
there only implicitly; in the written language, however, they are
structurally no less ``real'' than other categories of grammar, such
as phrases defined over the syntactic properties of their lexical
heads.
We are currently developing the apparatus for a grammatical
description of the distribution of text categories. It is already
clear that the grammar will have to avail itself of several different
sorts of rules: a set of (perhaps context-sensitive) phrase structure
rules that generate text structures, as well as several levels of
``presentation rules'' that determine how text structures will be
realized as figures or visual objects in a particular environment; the
latter, it turns out, must be ordered much like the rules of
phonology. In addition, we are beginning to draw out the interpretive
rules associated with particular formal delimiters.
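A toy version of this apparatus (our sketch, not the project's actual
rules) separates the two rule types, with presentation rules applied
in a fixed order, much as phonological rules are:

  TEXT_RULES = {                       # phrase-structure fragment
      "Document":  [["Paragraph"], ["Paragraph", "Document"]],
      "Paragraph": [["Sentence"], ["Sentence", "Paragraph"]],
  }

  PRESENTATION = [                     # ordered, like rules of phonology
      lambda s: s.strip() + "  ",      # sentence: two-space separation
      lambda s: "    " + s + "\n",     # paragraph: indent and line break
  ]

  def present_paragraph(sentences):
      # Realize one Paragraph text structure as a visual figure.
      body = "".join(PRESENTATION[0](s) for s in sentences)
      return PRESENTATION[1](body)

  print(present_paragraph(["The cat sat.", "It purred."]))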
GRAMMATICAL THEORY AND DISCOURSE STRUCTURE (GTDS)
Joan Bresnan and Annie Zaenen
Project Participants: Khalid Abd-rabbo, Farrell Ackerman, Joan Bresnan
(Project Leader), Young-Mee Cho, Carolyn
Coleman, Christopher Culy, Amy Dahlstrom, Mary
Dalrymple, Keith Denning, Jeffrey Goldberg, Kristin
Hanson, Ki-Sun Hong, Masayo Iida, Sharon
Inkelas, Mark Johnson, Smita Joshi, Jonni
Kanerva, Paul Kiparsky, Will Leben, Marcy
Macken, Sam Mchombo, Lioba Moshi, Catherine
O'Connor, Mariko Saiki, Peter Sells, John
Stonham, Michael Wescoat, Annie Zaenen, Draga Zec
A central goal of this project is to study the interaction between
syntax and other areas of linguistic investigation. Recent work has
been concentrated on the relations holding between the syntax module
and the morphological and discourse modules. While it has often been
pointed out that not all linguistic phenomena can be described in
terms of sentence grammar (in terms of syntax, that is), syntacticians
are often reluctant to widen the scope of their investigations, as
they feel that they have been relatively successful within the
boundaries of syntax proper. Additionally, there is a concern that a
field too broadly defined might lead to accounts that are vague and
ill-defined. As our research has developed, we have found an
integrative approach to be not only feasible but also illuminating.
The general theory that is emerging is not one whose primary concern
has been to reduce all linguistic phenomena to one set of primitives
within a single module of the grammar; like research in a number of
other CSLI projects we are instead attempting to explain the facts as
interactions among a variety of modules in the grammar. This approach
flies in the face of much of current theory in linguistics, for it is
often argued by syntacticians that only very constrained frameworks
can lead to interesting insights. The merit of such research programs
seems to us debatable: concentrating on only one type of ``primitive''
leads one to overlook regularities that do not fit that kind of
representation; a more accommodating framework allows them to be
captured in a more revealing way.
Our starting point has been syntactic theory as exemplified
primarily in Lexical-Functional Grammar. The original proposal aimed
the extension mainly at the discourse level; in practice, interactions
with morphology have also been a point of interest. While our
approach is admittedly cautious, it has the great advantage of
allowing precise accounts of the studied phenomena. In the
organization of the research an important effort was made to include a
substantial number of graduate students. This decision dictated up to
a certain point the concrete shape of the studies undertaken: at least
some of the studies had to take the form of individual papers.
An important unifying theme of this year's research has been the
status and realization of various kinds of pronominal elements.
Pronouns in natural language seem to be a central device in helping
discourse to cohere. Various pronominal forms play various kinds of
functions and appear to reflect the structure of the discourse. As
such, they figure centrally in much of the CSLI research on discourse
and the effect of context quite generally. Following our working
strategy, we are identifying a cluster of properties that pronouns
have across languages. We are figuring out how these systems of
properties interact to predict the complex grammatical structures that
we find. Our research has been wide-ranging, as a quick survey shows:
investigations into the interactions between agreement, anaphora, and
word order in Bantu languages, cross-linguistic investigations on
reflexives, research on Finnish possessives, and control phenomena in
Serbo-Croatian. Below, we summarize the main findings.
The work on Bantu languages centered on a much-debated problem, the
status of object- and subject-markers in Bantu languages: are they
agreement markers, or anaphoric markers (incorporated pronouns)? Both
answers have been put forward in the past. By taking seriously the
interactions among the morphological, syntactic, and discourse
modules, we have been able to give a clearer answer to this question.
Bresnan and Mchombo show that when this question is related to other
characteristics of the language (especially facts of word order, the
discourse functions of TOPIC and FOCUS, and the function of
independent pronouns) an answer can be given that is much more
illuminating than one that would be available if one was restricted to
pure morpho-syntactic facts, such as the verbal morphology and the
presence or absence of full NP subjects and objects (Bresnan and
Mchombo, 1986a). The integrated analysis is worked out for Chichewa
in Bresnan and Mchombo (1986b) and extended to Sesotho in Johnson and
Demuth (1986). Currently, further research is in progress on Kihaya
and Kichaga, as the arguments developed to distinguish between
agreement and anaphora in Chichewa make interesting predictions for
those (and other Bantu) languages.
These results are important not only because they provide fruitful
new ways to distinguish between agreement and anaphora, while
explaining why the two are so closely related, but also because they
provide syntactic criteria for identifying discourse functions (at
least in some languages). In the clear cases, what is learned about
these functions can shed light on other cases, in which the discourse
notions are less clearly reflected in the syntax or the morphology.
The work on reflexives has focused on two aspects, a previously
well-established distinction (between so-called transitive and
intransitive reflexive constructions), and the rather puzzling
extensions of use that reflexive morphemes seem to acquire in different
languages. On the first topic Sells, Zaenen and Zec undertook a
cross-linguistic study based on data from English, Finnish, German,
Dutch, Chichewa, Japanese, Serbo-Croatian, and Warlpiri, showing that
a simple dichotomy between transitive and intransitive reflexive
constructions is insufficient (Sells, Zaenen, and Zec, 1985). In fact,
they argue that there are at least three different types of
distinction that have to be made: transitivity versus intransitivity
in the lexicon, synthetic versus analytic realization in constituent
structure, and open- versus closed-predicate readings in the
semantics. The study shows that the relations between lexical
structure, constituent structure, and semantic representation are less
directly predictable (and hence more interesting) than is often
assumed. In particular, it does not seem to be possible to predict
from the morphology what the syntactic or semantic status of a
reflexive will be (as has been assumed in much of the previous work on
this topic): forms that are phonologically part of a word can be
lexically and semantically independent entities, and forms that are
free syntactically can be lexically and semantically ``bound.'' The
data also show that reflexives cannot always be treated as bound
variables in the semantics; the consequences of this result are
elaborated further in Sells (1986) in which a more sophisticated
picture of the representation of reflexives and other anaphora is
given.
Extensions in the use of reflexive constructions are well-known
across the world's languages but ill-understood. While we do not yet
have a good theory about these phenomena, the research so far shows
that it is again important to transcend a narrowly defined syntactic
approach, even for the modest goal of describing the data. At one end
of the spectrum of extended uses of the reflexive, Coleman
(1986) gives a detailed description of the two morphological
reflexives of Kunparlang, showing, among other things, that a variety
of uses of one of the markers do have a unified analysis. At the
other end of the syntactic spectrum, Sells concentrates on elucidating
the notion of logophoricity (a property associated with reflexive-like
pronominal elements) in accounts of some nonclause-bounded uses of
reflexives (Sells, 1985). It is shown that the notion of
logophoricity covers three more primitive notions, largely pragmatic
ones. Iida and Sells analyze the logophoric use of a reflexive
word in Japanese and detail the interplay between pragmatic and
syntactic factors in its appropriate use (Sells and Iida, 1986).
It seems likely that these extensions into morphology and
pragmatics correlate with the typology proposed in Sells, Zaenen, and
Zec (1985); however, these predictions have not yet been tested in
detail. A workshop on the topic of reflexivization organized by Sells
and Zaenen for the summer of 1986 intends to go deeper into this
matter and related issues.
Another study that deals with the problems of anaphora is reported
in Zec's paper (1986), where it is proposed that the relation between
an argument of the main clause and the subject of the embedded clause
in sentences such as ``John tried to leave'' should be reduced to an
anaphoric relation. This contrasts with other recent proposals (made
most explicitly in work by Chierchia) that claim there is a regular
relation between syntax and semantics in such sentences: the
infinitival complement is a syntactic VP that denotes a semantic
property. Zec shows that the correlation between the entailments that
Chierchia takes to establish this relation between semantic properties
and syntactic VPs does not hold across all languages: in
Serbo-Croatian the complements of ``try'' can be shown to be (tensed)
full clauses, but the semantic entailments are the same as in English.
In other words, it is shown that certain of the readings such
sentences have may result from the special anaphoric status of an
argument as well as from the (syntactic) lack of an argument, and that
Chierchia's evidence in itself cannot serve to choose unequivocally
between a syntactic and a semantic representation of the control
relation.
In work on Finnish, Kanerva (1986) presents another case in
which an element that is part of a word phonologically has to be
looked upon syntactically as independent. Kanerva argues against the
view that the possessive morphemes in Finnish should be analyzed as
clitics and gives persuasive phonological and morphological evidence
that they are suffixes. Their syntactic function is, however, the same
as that of a possessive pronoun in English, which is a fact one would
not necessarily expect on a nonintegrated view.
Some of the work described above has been collected and will be
published as the first volumes of Studies in Grammatical Theory and
Discourse Structure. The first volume is virtually ready to go to
press under the title ``Interactions of Morphology, Syntax and
Discourse,'' edited by M. Iida, S. Wechsler, and D. Zec. A second
volume, to be edited by A. Zaenen, is in preparation.
References
Bresnan, J. and Mchombo, S. 1986a. Grammatical and Anaphoric
Agreement. In Paper from the Parasession on Pragmatics and
Grammatical Theory. Chicago Linguistic Society.
Bresnan, J. and Mchombo, S. 1986b. Topic, Pronoun, and Agreement in
Chichewa. To appear in M. Iida, S. Wechsler, and D. Zec (Eds.),
Studies in Grammatical Theory and Discourse Structure: Interactions of
Morphology, Syntax, and Discourse, Vol. I. CSLI Working Papers, No. 1.
Stanford: CSLI.
Coleman, C. 1986. Reflexive Morphology in Kunparlang: Interactions of
Morphology, Syntax, and Discourse. To appear in M. Iida, S. Wechsler,
and D. Zec (Eds.), Studies in Grammatical Theory and Discourse
Structure: Interactions of Morphology, Syntax, and Discourse, Vol. I.
CSLI Working Papers, No. 1. Stanford: CSLI.
Kanerva, J. 1986. Morphological Integrity and Syntax: The Evidence
from Finnish Possessive Suffixes. To appear in M. Iida, S. Wechsler,
and D. Zec (Eds.), Studies in Grammatical Theory and Discourse
Structure: Interactions of Morphology, Syntax, and Discourse, Vol. I.
CSLI Working Papers, No. 1. Stanford: CSLI.
Johnson, M. and Demuth, K. 1986. Discourse Functions and Agreement in
the Sotho Languages. Paper presented at the 1986 African Linguistics
Conference, Indiana University, Bloomington.
Sells, P. 1985. The Discourse Representation of Logophoricity.
Presented at the 60th Annual Meeting of the Linguistic Society of
America, Seattle. To appear as: On the Nature of `Logophoricity' in
A. Zaenen (Ed.), Studies in Grammatical Theory and Discourse
Structure: Logophoricity and Bound Anaphora. Vol. II. CSLI Working
Papers, No. 2. Stanford: CSLI.
Sells, P., Zaenen, A., and Zec, D. 1985. Reflexivization Variation:
Relations between Syntax, Semantics, and Lexical Structure. Presented
at the 60th Annual Meeting of the Linguistic Society of America,
Seattle. To appear in M. Iida, S. Wechsler, and D. Zec
(Eds.), Studies in Grammatical Theory and Discourse Structure:
Interactions of Morphology, Syntax, and Discourse, Vol. I. CSLI
Working Papers, No. 1. Stanford: CSLI.
Sells, P. 1986. Coreference and Bound Anaphora: A Restatement of the
Facts. Presented at the 16th Annual Meeting of the North Eastern
Linguistics Society, McGill University, Montreal. To appear in S.
Berman, J. Choe, and M. McDonough (Eds.), Proceedings of NELS-16.
Amherst: GLSA.
Sells, P. and Iida, M. 1986. Discourse Factors in the Binding of
zibun. Presented at the Workshop on Japanese Linguistics, CSLI. To
appear in the proceedings.
Zec, D. 1986. On the Obligatory Control in Clausal Complements. In
Proceedings of the First Eastern States Conference on Linguistics.
-----------
end of part 5 of 7
-------
∂24-Jun-86 2230 JAMIE@SU-CSLI.ARPA CSLI Monthly, Vol. 1, No. 4, part 6
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 24 Jun 86 22:30:16 PDT
Date: Tue 24 Jun 86 15:35:28-PDT
From:
Subject: CSLI Monthly, Vol. 1, No. 4, part 6
To: newsreaders@SU-CSLI.ARPA
AFT LEXICAL REPRESENTATION THEORY
Julius Moravcsik
Project Participants: Colleen Crangle, Ann Gardner, Julius
Moravcsik (Project Leader), Stephen Neale,
Ivar Tonisson
AFT is a theory of word-meaning whose distinguishing claims are:
o Meaning only partially determines extension.
o Meaning is divided into four components: constituency,
structure, function, and agency.
o Meanings are attached to words in the process of
explaining what something labelled by a word is.
The key intuitive idea underlying AFT is that humans are
theory-constructing animals. The meaning of a word like ``emergency''
is the combination of factors one would refer to in the course of
explaining what an emergency is. The four components posited by AFT
permit the statement of generalizations that one could not state in
semantic theories that operate only with the notions of synonymy and
homonymy.
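By way of illustration only (the project defines no such notation),
an AFT-style lexical entry might be encoded as a simple data
structure whose meaning is factored into the four components; the
sketch below is in Python, and all field names and values are our
invention:

  # Illustrative sketch only: a possible encoding of an AFT-style
  # lexical entry, with meaning factored into the four components
  # named above. All field names and values are invented.
  from dataclasses import dataclass

  @dataclass
  class AFTEntry:
      word: str
      constituency: list   # what the thing is made of or consists in
      structure: list      # how the constituents are arranged
      function: list       # what the thing typically does or is for
      agency: list         # what brings it about or sustains it

  emergency = AFTEntry(
      word="emergency",
      constituency=["some unforeseen state of affairs"],
      structure=["sudden onset", "limited duration"],
      function=["calls for immediate response"],
      agency=["brought about by accident or external force"])

  # Meaning only partially determines extension: deciding whether a
  # borderline case falls under the word takes reasoning beyond
  # these factors, possibly of a nonmonotonic kind.
  print(emergency.function)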
Filling in the four components of the meaning structure and filling in
information that will enable us to fix reference involves reasoning
other than the merely deductive variety, so AFT is linked to the
exploration of nonmonotonic reasoning. Likewise, the fourfold
factorization of meaning allows the gradual specification of some
meaning factors, and hence allows for ``change in meaning'' in the
traditional sense of that notion. This is similar to recent ideas of
Winograd's about how context ``creates new meanings.''
The group established the following four goals:
1. To clarify the relation of AFT to compositional semantics
2. To explain how AFT is related to theories of understanding
and mental representation
3. To explore ways in which AFT provides a semantic lexicon
that can be an input to syntax
4. To pinpoint the empirical facts for which AFT seems to
give more satisfactory explanations than alternative
theories
The group considered the relationship between AFT and work within
several other syntactic and semantic frameworks, including procedural
semantics and Lexical-Functional Grammar. They concluded that the
lexicon contains two distinct components---a semantic component and a
syntactic component. As of now there is no clear and systematic way
of relating these, and thematic relations cannot be given a clear
semantic equivalent in logical semantics of either a standard or a
nonstandard variety. If AFT can give a clear and well-justified
presentation of semantic verb argument-structure, this should be of
use as an input to the determination of thematic relations.
Visiting speakers included Joseph Almog of UCLA, Nathan Salmon of
UCSB, and Scott Soames of Princeton. Almog and Salmon gave convincing
evidence for the claim that no purely qualitative specifications can
give necessary and sufficient conditions of application for natural
kind terms. One possible
conclusion one can draw from this is that these terms function like
proper names, as ``rigid designators.'' On the other hand, one could
conclude, in line with one of the AFT premises, that these terms have
meanings that do not fully determine extension. We discussed various
reasons for preferring the second conclusion.
Soames discussed semantic competence. His view is that the
specifications of content by theories of semantics should not be taken
as necessarily describing elements and structures that play key roles
in the psychological processing. We agreed that this applies to AFT;
while the brain or mind presumably does not contain labels
corresponding to the elements that AFT singles out, they could well
have psychological reality, in some interesting sense.
We also compared the work of David Dowty and of Dorit Abusch to the
verb and aspect semantics of Dov Gabbay and Moravcsik. We found the
basic semantic categorization of Dowty's system, which was arrived at
on syntactic grounds, to be the same as that of the Gabbay/Moravcsik
system, which was arrived at on semantic grounds. In comparing the
AFT theory with Abusch's suggestions in terms of empirical
predictions, we concluded that the major problem for AFT is the
inclusion of causality into the semantic analysis.
VISUAL COMMUNICATION
Alexander Pentland
Project Participants: Alex Pentland, Fred Lakin
(many others attended the project's weekly seminars)
The main goal of the Visual Communication project is to discover
the primitive perceptual and design elements of visual media (``visual
morphemes''), and to use them to build computer tools for visual
communication. Our long-term activities fall into three areas: (1)
developing an understanding of how people reason about, discuss, and
perceive visual situations, (2) applying this understanding to develop
representations capable of supporting natural description, concise
reasoning, and perceptual attunement over broad ranges of visual
situations, and (3) using the resulting representations in the
construction of computer systems for augmenting both visual and
natural language communication.
To date we have concentrated on two domains that make intensive use
of visual communication: (1) designing three-dimensional forms and (2)
group ``blackboard'' activity, such as spontaneously occurs whenever
groups of people attempt to, e.g., organize a research effort or
design a computer system. We feel that these two domains cover a wide
range of the interesting theoretical problems, and are also the most
potentially valuable application areas.
Designing Three-Dimensional Forms
Natural, efficient communication depends upon shared
representations. Current 3-D graphics systems, however, use
representations that are quite distant from those people use. The
result is that construction of 3-D models is much like programming:
meticulous translation from the person's internal representation to
the machine's representation. For instance, engineers typically
sketch a new part using paper and pencil, and then give the sketch to
a draftsman who uses a Computer Aided Design (CAD) system to complete
the detailed specification of the model.
The use of paper for sketches and computers for final models is bad
for exactly the same reasons that the use of paper for final models is
bad: lack of flexibility in the medium, unneeded duplication of
effort, no library of previous drawings, and so forth. Our idea, then,
was to develop a tool that allows the user to very quickly build or
modify a 3-D model; i.e., to replace the pencil and paper. A user would
directly sketch a 3-D form on the computer, playing with the shape until
it looks right, rather than approaching the modeling task as one of
entering a carefully predefined model into the computer.
We wanted, therefore, a tool that is not specialized to any one
application domain but, like pencil and paper, is equally applicable
to any 3-D modeling task. And further, like pencil and paper, we want
this modeling tool to be generally available: i.e., cheap enough to
put one on everyone's desk, so that people will actually use it.
We have implemented our first approximation of a solution to these
desiderata in a system called SuperSketch (named for ``sketching'' and
``superquadrics''), which provides an environment for interactively
sketching and rendering 3-D models. The specific major design
criteria for SuperSketch are: (1) a representation that closely
matches the way people naively think about and discuss shape, (2)
effortless interaction approaching that of pencil and paper, and (3)
interactive, ``real-time'' feedback using a Motorola 68020-class
machine without additional hardware.
The representation we have developed describes scene structure in a
manner that is like our naive perceptual notion of ``a part,'' and
allows qualitative description of complex surfaces by means of
physically- and psychologically-meaningful statistical abstractions
(Pentland 1984a, 1986a). The representational system combines fractal
functions (Mandelbrot, 1982; Pentland, 1984b), for use in describing
3-D texture, with superquadric functions (defined below) for
describing form or shape in a concise and natural manner.
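To give the flavor of the fractal half of the system (a textbook
construction of ours, not the project's code), a one-dimensional
fractal profile can be generated in Python by midpoint displacement,
with a single roughness parameter controlling the statistics at every
scale; analogous two-dimensional constructions yield surface textures:

  # Minimal sketch (ours, not the project's code): a one-dimensional
  # fractal profile built by midpoint displacement. The roughness
  # parameter H in (0,1) controls the fractal dimension.
  import random

  def fractal_profile(levels=8, H=0.7, scale=1.0):
      pts = [0.0, 0.0]                # endpoints of the profile
      for _ in range(levels):
          nxt = []
          for a, b in zip(pts, pts[1:]):
              mid = (a + b) / 2 + random.gauss(0, scale)
              nxt += [a, mid]
          nxt.append(pts[-1])
          pts = nxt
          scale *= 2 ** (-H)          # shrink displacements each level
      return pts

  profile = fractal_profile()
  print(len(profile), min(profile), max(profile))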
To elaborate, the idea behind this representational system is to
provide a vocabulary of shapes and transformations that will allow us
to model an object world as the relatively simple composition of
component ``parts,'' in much the same manner as people seem to do
(Biederman, 1985; Pentland, 1986a). The most primitive notion in this
representation may be thought of as analogous to a ``lump of clay,''
modeling primitive that may be deformed and shaped, but which is
intended to correspond roughly to our naive perceptual notion of ``a
part.'' For this basic modeling element we use a parameterized family
of shapes known as superquadrics (Barr, 1981). This family of functions
includes cubes, cylinders, spheres, diamonds, and pyramidal shapes as
well as the round-edged shapes intermediate between these standard
shapes. Superquadrics are, therefore, a superset of the modeling
primitives currently in common use.
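The superellipsoid members of Barr's (1981) family have a standard
parametric form; the following sketch is our rendering of that
formula in Python, not SuperSketch code:

  # Our rendering of the standard superellipsoid formula (Barr, 1981),
  # not SuperSketch code. The exponents e1 and e2 sweep the family
  # from cube-like shapes (small e) through the sphere (e = 1) toward
  # diamond- and pyramid-like shapes (e near 2 and beyond).
  from math import cos, sin, copysign, pi

  def spow(base, e):                  # signed power: sign(b) * |b|**e
      return copysign(abs(base) ** e, base)

  def superquadric_point(eta, omega, a1=1.0, a2=1.0, a3=1.0,
                         e1=1.0, e2=1.0):
      x = a1 * spow(cos(eta), e1) * spow(cos(omega), e2)
      y = a2 * spow(cos(eta), e1) * spow(sin(omega), e2)
      z = a3 * spow(sin(eta), e1)
      return (x, y, z)

  # e1 = e2 = 1 gives a sphere; e1 = e2 = 0.1 a rounded cube.
  print(superquadric_point(pi / 4, pi / 4, e1=0.1, e2=0.1))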
These basic ``lumps of clay'' (with various symmetries and
profiles) are used as prototypes that are then deformed by stretching,
bending, twisting, or tapering, and then combined using Boolean
operations to form new, complex prototypes that may, recursively,
again be subjected to deformation and Boolean combination. As an
example, the back of a chair is a rounded-edge cube that has been
flattened along one axis, and then bent somewhat to accommodate the
rounded human form. The bottom of the chair is a similar object, but
rotated 90 degrees, and by ``oring'' these two parts together with
elongated rectangular primitives describing the chair legs, we obtain a
complete description of the chair.
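One common way to realize this deform-and-combine style of modeling,
sketched below under our own simplifying assumptions (SuperSketch's
actual machinery is not shown), is with implicit inside-outside
functions: deformations reparameterize space, and Boolean ``or''
becomes a pointwise minimum:

  # Illustrative sketch only (not SuperSketch's internals): parts as
  # implicit inside-outside functions with f(p) < 1 meaning p lies
  # inside the part.

  def ellipsoid(a1, a2, a3):          # a degenerate superquadric
      return lambda p: (p[0]/a1)**2 + (p[1]/a2)**2 + (p[2]/a3)**2

  def flatten(f, k):                  # squash a part along its z axis
      return lambda p: f((p[0], p[1], p[2] * k))

  def rotate90(f):                    # swap y and z: rotate about x
      return lambda p: f((p[0], p[2], p[1]))

  def union(f, g):                    # Boolean "or" of two parts
      return lambda p: min(f(p), g(p))

  back = flatten(ellipsoid(1.0, 1.0, 1.0), 4.0)   # a flattened lump
  seat = rotate90(flatten(ellipsoid(1.0, 1.0, 1.0), 4.0))
  chair = union(back, seat)           # bending is omitted for brevity
  print(chair((0.0, 0.0, 0.1)) < 1.0) # is this point inside the chair?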
Interestingly, we have found that when adult human subjects are
required to verbally describe imagery with completely novel content,
their typical spontaneous strategy is to employ a descriptive system
analogous to this one (Hobbs, 1985). Thus it appears that this
representation may be able to provide considerable insight into the
structure of people's verbal descriptions of shape.
Perhaps most importantly, however, we have discovered (and been
able to both mathematically prove and practically demonstrate) that
the primitive elements of this representation have a unique property
that allows us to *directly recognize* them in the information in the
retinal array, using only very simple mathematical operations
(Pentland, 1986b). Further, this recognition is overconstrained: the
wealth of information in the image array allows ``reliable'' recovery of
these basic representational elements. That is, the elements of this
representation have a unique regularity that allows any properly
attuned mechanism to ``reliably'' infer their 3-D shape and
arrangement. Thus descriptions formed in this 3-D shape
representation may be firmly grounded on the facts of the physical
world.
In sum, we have implemented SuperSketch on a Symbolics 3600, and
found that we were able to provide the user with adequate feedback by
devising a new, linear-time hidden line algorithm that allows
real-time display of two engineering views of the scene without need
for special hardware.
We have been able to demonstrate that this representational system
is able to accurately describe a very wide range of natural and
man-made forms in an extremely simple, and therefore useful, manner.
Further, we have found that descriptions couched in this
representation are similar to people's (naive) verbal descriptions and
appear to match people's (naive) perceptual notion of ``a part.'' And
finally, we have shown that descriptions framed in the representation
have markedly facilitated man-machine communication about both natural
and man-made 3-D structures. It appears, therefore, that this
representation gives us the right ``control knobs'' for discussing and
manipulating 3-D forms.
It is clear, however, that the representational framework developed
so far is not complete. It appears that additional modeling
primitives, such as branching structures or particle systems, will be
required to model the way people think about objects such as trees,
hair, fire, or river rapids. Our future work will involve the
integration of these primitives, together with time and motion
primitives, into the framework that we have presented here.
Blackboard Activity
Group conversational graphics, such as occurs when groups get
together to organize a project or design a software product, involves
a public image knowingly utilized by a communicating working group.
Such group discussion and communication is a critical, sometimes
time-consuming phase of the design process, and to date has been
almost completely immune to any sort of technological improvement:
whiteboards are the state of the art.
This particular kind of activity, however, has certain
characteristics which seem to make it an excellent domain from the
standpoint of research into computer-aided text-graphic dialogs (Lakin,
1986). Some of these characteristics are:
o Agility: a challenge for interface and representation that will help
hone our notions of what constitutes the critical variables in
man-machine communication.
o Explicitness: the group would like to have a `complete' record on
the external display, including ``history'' and ``alternate
development'' editing capabilities that seem to demand computer
enhancement.
o Visual languages: formal, special purpose visual languages are often a
component of group graphics; we have found that these formal languages
can be amenable to automatic interpretation.
Our goal is to make computers understand and assist such
blackboard-like text-graphic dialogs. We began with an analysis of
three specific visual languages used in conversational graphics: DAGS
(directed acyclic graph notation used by some linguists), SIBTRAN
(graphic devices for organizing textual sentence fragments), and the
Visual Grammar Notation---the notation in which the other grammars are
written. We first analyzed the computer parsing of these languages,
i.e., how the computer recovers their underlying syntactic structure.
Once a phrase in a particular visual language has been identified and
parsed, we are left with a higher level representation of the visual
phrase, a representation that we then use to support the communicative
activity. For the visual languages addressed to date, appropriate
action includes: (1) compilation into an internal form representing
the semantics of the phrase, (2) translation into another
text-graphic language, or (3) assistance for agile manual
manipulation.
The research accomplished to date combines computer graphics,
symbolic computation, and textual linguistics to accomplish ``spatial
parsing'' for such visual languages. (Previous work has parsed
diagrammatic images, such as two-dimensional mathematical
expressions, using a grammar that was visually notated; however, the
expression and the grammar were input by hand.)
We have implemented, on a Symbolics 3600, a text-graphic parser
that utilizes context-free grammars which are both visual and
machine-readable. The parser takes two inputs: a region of image
space and a visual grammar. The parser employs the grammar in
recovering the structure for the graphic communication object lying
within the region.
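The following toy sketch in Python (ours, not the implemented parser)
conveys the idea: a grammar rule pairs a constituent label with a
spatial predicate that must hold among the sub-objects it combines,
and the rule names are hypothetical:

  # Toy sketch of spatial parsing (ours, not the implemented parser).
  def above(a, b):                    # a simple spatial predicate
      return a["y"] > b["y"]

  # Hypothetical rule: a LABELED-NODE is a TEXT item above a NODE.
  rules = [("LABELED-NODE", ("TEXT", "NODE"), above)]

  def parse(objects):
      for label, (c1, c2), pred in rules:
          for a in objects:
              for b in objects:
                  if a["cat"] == c1 and b["cat"] == c2 and pred(a, b):
                      yield {"cat": label, "parts": (a, b),
                             "y": max(a["y"], b["y"])}

  scene = [{"cat": "TEXT", "y": 2.0}, {"cat": "NODE", "y": 1.0}]
  print(list(parse(scene)))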
We have shown how to write grammars using the Visual Grammar
Notation and have written grammars for the three languages mentioned
above. In addition, parsers and interpreters have been written for
all three languages.
References
Barr, A. 1981. Superquadrics and Angle-Preserving Transformations.
IEEE Computer Graphics and Applications (1):1-20.
Biederman, I. 1985. Human Image Understanding: Recent Research and
a Theory. Computer Vision, Graphics and Image Processing, Vol. 32,
No. 1, pp. 29-73.
Hobbs, J. 1985. Final Report on Commonsense Summer. SRI Artificial
Intelligence Center Technical Note 370.
Lakin, F. 1986. Spatial Parsing for Visual Languages. To appear in
S-K. Chang (Ed.), Visual Languages. New York: Plenum Press.
Mandelbrot, B. 1982. The Fractal Geometry of Nature. San Francisco:
W. H. Freeman.
Pentland, A. 1984a. Perception of Three-Dimensional Textures.
Investigative Ophthalmology and Visual Science, (25)3:201.
Pentland, A. 1984b. Fractal-Based Description of Natural Scenes.
IEEE Transactions on Pattern Analysis and Machine Intelligence,
(6)6:661-674.
Pentland, A. 1986a. On Perceiving 3-D Shape and Texture. Presented
at the Symposium on Computational Models in Human Vision, Center for
Visual Science, University of Rochester, June 19-21.
Pentland, A. 1986b. Perceptual Organization and the Representation
of Natural Scenes. AI Journal, (28)2:1-39.
-----------
end of part 6 of 7
-------
∂24-Jun-86 2326 JAMIE@SU-CSLI.ARPA CSLI Monthly, Vol. 1, No. 4, part 7
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 24 Jun 86 23:26:51 PDT
Date: Tue 24 Jun 86 15:36:28-PDT
From:
Subject: CSLI Monthly, Vol. 1, No. 4, part 7
To: newsreaders@SU-CSLI.ARPA
---------------------
JOHN PERRY'S INAUGURAL LECTURE FOR THE HENRY WALDGRAVE STUART CHAIR
As of this academic year, John Perry was appointed Henry Waldgrave
Stuart Professor of Philosophy. The inaugural lecture, ``Meaning and
the Self,'' was held on the evening of May 23. About 150 people
attended the lecture, including Professor Keith Donnellan, Perry's
dissertation advisor when he was a student at Cornell. Donnellan
introduced Perry.
The lecture was about the concept of the self, and various
philosophical approaches to it, especially those of Hume and Kant.
Hume looked for the self but found nothing there. Kant thought it
must be an essential ingredient of almost all our thoughts,
perceptions, and actions. Perry showed how current work on the theory
of meaning suggests a resolution of these seemingly irreconcilable
positions.
A reception was held in Tanner Library following the lecture, where
discussion of the theme of the talk mixed with good food and wine.
The event was a highly memorable one for all concerned.
---------------------
CSLI POSTDOCTORAL FELLOWS
PETER SELLS
Sells received his PhD in Linguistics from the University of
Massachusetts in the summer of 1984, and came directly to CSLI as a
postdoctoral fellow.
He has mainly worked on anaphora, in particular, investigating the
interaction between syntactic and semantic or discourse-based
information; parts of this research appeared as ``Restrictive and
Non-Restrictive Modification'' (CSLI Report No. 28), and ``Coreference
and Bound Anaphora: A Restatement of the Facts'' (to appear in the
Proceedings of the 16th Meeting of the North-Eastern Linguistics
Society). More recently he has devoted his time to a study of the
phenomenon of ``logophoricity,'' through which pronouns are used in
contexts of indirect or secondary discourse; he will shortly be
finishing a paper on this topic.
Sells has worked with other researchers at CSLI, producing a paper
entitled ``Reflexivization Variation'' with Annie Zaenen and Draga
Zec, a cross-linguistic study of reflexive constructions; this will
appear in a collection of CSLI working papers on grammatical theory.
He also presented a paper with Masayo Iida at the Japanese Workshop in
March 1986 (entitled ``Discourse Factors in the Binding of `zibun' ''),
which will appear in the Proceedings.
In the fall of 1984 Sells cotaught two classes at Stanford, one on
Government-Binding Theory with Edit Doron, and one on Generalized
Phrase Structure Grammar with Gerald Gazdar and Ivan Sag. In the
spring of 1985 he gave a series of lectures at the University of
California, Santa Cruz, which were written up as a book in the summer
of 1985 under the title `Lectures on Contemporary Syntactic Theories',
CSLI Lecture Notes No. 3.
He spent the academic year 1985--86 on leave from CSLI, taking
visiting teaching positions at the Departments of Linguistics at
Stanford and the University of Texas at Austin, giving courses on
syntactic theory and anaphora. He plans to continue his study of
logophoricity in his second full year at CSLI, and to begin work on
the syntax, semantics, and computational implementation of various
ellipsis constructions in English.
Sells has clear views on the potential impact of CSLI on linguistics:
``I've often been asked if I think that CSLI will accomplish what the
original proposal suggested might be possible. I don't have a good
view of all of CSLI-related activity, but as far as the linguistics
part of things goes, I feel fairly confident that there will be a
large impact on the field. What we've discovered over the past two
years is that it will take a long time, but in ten years we'll be able
to look back to events that happened here which together put the field
in a somewhat different position. I suspect that the big boom in
machine translation projects in the 60's must have been similar to our
present circumstances; all of a sudden people saw how to put
theoretical and practical knowledge together, and everybody started
doing it. Then they realized that there was a lot about language that
they didn't know. I think we may be in the same position now, but
we're reaching our present limits at a much higher plateau.
The idea of doing linguistics with an emphasis on information
structures (of various kinds) *is* different, at least to my mind,
and we're beginning to ask all kinds of questions we never asked
before. This may not be particular to CSLI, but the thing about CSLI
is that here the research activity is much more focussed, and we have
such a great environment to work in---I mean both CSLI and Stanford.
Stanford is for me the kind of place where you can walk around and
feel in what you see and in the air that some serious work is being
done.''
---------------------
CSLI SNAPSHOTS: MARTHA POLLACK
Martha Pollack has just received her PhD in Computer and
Information Science from the University of Pennsylvania. She
completed her dissertation, entitled ``Inferring Domain Plans in
Question-Answering,'' as an employee of SRI International and as an
active participant in two CSLI projects: Rational Agency; and
Discourse, Intention, and Action. She is to be congratulated for
receiving Penn's Morris and Dorothy Rubinoff Award, awarded for a PhD
dissertation that has resulted in, or could lead to, innovative
applications of computer technology.
Pollack says that it was ``my longstanding interdisciplinary bent
that got me to SRI and CSLI.'' This ``bent'' began as an
undergraduate at Dartmouth College, where she originally considered a
double major in mathematics and anthropology, but gave that up to
design a special major called ``Linguistics''! (``Had I been more
creative,'' she notes, ``I might have called it something like
`Information Structures,' or even `Perspectives on the Study of
Language and Information.' '') She completed coursework in
mathematics, computer science, philosophy of language, and
anthropological linguistics, and then, since Dartmouth has no
linguistics department, spent a semester at MIT and Harvard University
studying syntax and semantics. She extended her study of linguistics
for a brief period at Stanford where she worked with Tom Wasow and
Ivan Sag. She then spent two years teaching computer programming in
industry before deciding to continue her graduate studies at Penn,
working with Aravind Joshi and Bonnie Webber. Pollack chose Penn
largely because of its active cognitive science group.
At Penn, she met Barbara Grosz, who was there visiting for a
semester. Grosz invited her to spend a summer at SRI's AI
Center---fortuitously it was the summer that CSLI came into existence.
For the next two years, Pollack commuted periodically from
Philadelphia to California to talk with Grosz, as well as with other
CSLI folks pursuing research in areas related to her own, such as Ray
Perrault, Phil Cohen, and Michael Bratman. Since joining SRI last
September, her cross-country commutes have been replaced by much
shorter trips between SRI and Ventura: she's bought a moped for this
purpose, and toys with the idea of moving up to a motor-scooter.
Pollack's current research interests reflect her participation in
the Rational Agency (RatAg) and Discourse, Intention, and Action (DIA)
groups. On the one hand, she is interested in continuing the study of
the principles of rational behavior and the design of systems that
embody those principles. On the other, she is concerned with the
methods in natural languages for conveying intentions. She sees a
synergy between these two lines of research in that some of the most
strenuous demands on a theory of rational behavior seem to arise from
the analysis of communicative behavior, while detailed analysis of
communicative behavior ultimately supposes a theory of intentions and
of rational behavior in the large. Her perspective on this research
is clearly that of a computer scientist: she wants to develop
artificial systems that exhibit rational behavior, including rational
communicative behavior. She notes that ``CSLI has not made me a
philosopher or a linguist: what it has done is enabled me to become a
better informed user of philosophical and linguistic theories.''
---------------------
CSLI PUBLICATIONS
The following reports have recently been published. They may be
obtained by writing to Trudy Vizmanos, CSLI, Ventura Hall, Stanford,
CA 94305 or publications@SU-CSLI.
CSLI-86-43. On Some Formal Properties of Metarules
by Hans Uszkoreit and Stanley Peters
48. A Compilation of Papers on Unification-Based Grammar
Formalisms, Parts I and II
by Stuart M. Shieber, Fernando C. N. Pereira, Lauri
Karttunen, and Martin Kay
49. An Algorithm for Generating Quantifier Scopings
by Jerry R. Hobbs and Stuart M. Shieber
50. Verbs of Change, Causation and Time
by Dorit Abusch
-------------------------------------------------------------------------
Editor's Note:
The next issue of the Monthly will be the October issue.
-------------------------------------------------------------------------
--Elizabeth Macken
Editor
end of part 7 of 7
-------
∂14-Jul-86 0947 EMMA@CSLI.STANFORD.EDU [Richard Waldinger <WALDINGER@SRI-AI.ARPA>: talk: program transformation, tuesday]
Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 14 Jul 86 09:47:26 PDT
Date: Mon 14 Jul 86 09:09:53-PDT
From: Emma Pease <Emma@CSLI.STANFORD.EDU>
Subject: [Richard Waldinger <WALDINGER@SRI-AI.ARPA>: talk: program transformation, tuesday]
To: friends@CSLI.STANFORD.EDU
Tel: (415) 723-3561
Return-Path: <WALDINGER@SRI-AI.ARPA>
Received: from SRI-AI.ARPA by SU-CSLI.ARPA with TCP; Fri 11 Jul 86 17:11:02-PDT
Date: Fri 11 Jul 86 17:10:47-PDT
From: Richard Waldinger <WALDINGER@SRI-AI.ARPA>
Subject: talk: program transformation, tuesday
To: aic.associates@SRI-WARBUCKS.ARPA, planlunch@SRI-WARBUCKS.ARPA,
csl.distribution@SRI-WARBUCKS.ARPA, friends@SU-CSLI.ARPA
Title: Efficient Compilation of Linear Recursive Functions
into Object-Level Loops
Speaker: Hessam Khoshnevisan,
Imperial College, London
Time: Tuesday, 15 July, 4:15pm
Place: New AIC Conference Room, EJ228
Building E, SRI (Visitors from outside SRI please
come to the reception 5 minutes early)
Coffee: 3:45pm in Waldinger's office
EFFICIENT COMPILATION OF LINEAR RECURSIVE
FUNCTIONS INTO OBJECT LEVEL LOOPS
Hessam Khoshnevisan
Department of Computing, Imperial College, London
ABSTRACT
While widely recognized as an excellent means for solving
problems and for designing software, functional programming languages
have suffered from their inefficient implementations on conventional
computers. A route to improved run-time performance is to transform
recursively defined functions into programs which execute more quickly
and/or consume less space. We derive equivalent imperative
programming language loops for a large class of LINEAR recursive
functions of which the tail-recursive functions form a very small
subset. We first identify a small set of primitive function defining
expressions for which we determine the corresponding loop-expressions.
We then determine the loop-expressions for linear functions defined by
any expressions which are formed from those primitives. In this way,
a very general class of linear functions can be transformed
automatically into loops in the parsing phase of a compiler, since the
parser has in any case to determine the hierarchical structure of
function definitions. Further transformation may involve specific
properties of particular defining expressions, and adopt previous
schemes. In addition, equivalent linear functions can be found for
many non-linear ones which can therefore also be transformed into
loops.
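As a minimal illustration of the kind of rewrite the abstract
describes (ours, in Python, not the paper's derivation scheme),
consider a linear but non-tail-recursive function and an equivalent
object-level loop:

  # Minimal illustration (ours, not the paper's algorithm): a LINEAR
  # recursive function -- one recursive call per branch -- that is
  # not tail-recursive, and an equivalent object-level loop of the
  # kind such transformations produce.

  def prod_rec(n):                    # linear, non-tail recursion
      if n == 0:
          return 1
      return n * prod_rec(n - 1)

  def prod_loop(n):                   # the derived imperative loop
      acc = 1
      for i in range(1, n + 1):
          acc = acc * i               # accumulate instead of recurse
      return acc

  assert prod_rec(10) == prod_loop(10) == 3628800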
-------
∂18-Aug-86 1322 EMMA@CSLI.STANFORD.EDU [coraki!pratt@Sun.COM (Vaughan Pratt): Seminar: Wu Wen-tsun, "Mechanization of Geometry"]
Received: from CSLI.STANFORD.EDU by SAIL.STANFORD.EDU with TCP; 18 Aug 86 13:22:22 PDT
Date: Mon 18 Aug 86 12:26:12-PDT
From: Emma Pease <Emma@CSLI.STANFORD.EDU>
Subject: [coraki!pratt@Sun.COM (Vaughan Pratt): Seminar: Wu Wen-tsun, "Mechanization of Geometry"]
To: friends@CSLI.STANFORD.EDU
Tel: (415) 723-3561
Return-Path: <coraki!pratt@Sun.COM>
Received: from sun.com by CSLI.STANFORD.EDU with TCP; Mon 18 Aug 86 12:08:06-PDT
Received: from sun.uucp by sun.com (3.2/SMI-3.0)
id AA09562; Mon, 18 Aug 86 12:03:00 PDT
Received: by sun.uucp (1.1/SMI-3.0)
id AA06264; Mon, 18 Aug 86 11:38:36 PDT
Received: by coraki.uucp (3.2/SMI-1.2)
id AA01155; Mon, 18 Aug 86 11:25:38 PDT
Date: Mon, 18 Aug 86 11:25:38 PDT
From: coraki!pratt@Sun.COM (Vaughan Pratt)
Message-Id: <8608181825.AA01155@coraki.uucp>
To: aflb.all@su-score.arpa, friends@su-csli.arpa, logmtc@su-ai.arpa
Subject: Seminar: Wu Wen-tsun, "Mechanization of Geometry"
SPEAKER Professor Wu Wen-tsun
TITLE Mechanization of Geometry
DATE Thursday, August 21
TIME 2:00 pm
PLACE Margaret Jacks Hall, room 352
ABSTRACT
A mechanical method of geometry based on Ritt's characteristic set
theory will be described which has a variety of applications including
mechanical geometry theorem proving in particular. The method has been
implemented on computers by several researchers and turns out to be
efficient for many applications.
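To convey the flavor of the method, here is a toy example of ours,
omitting Wu's triangulation and nondegeneracy conditions, and
assuming Python with the sympy library for polynomial pseudo-division:

  # Toy illustration only, not Wu's full algorithm. Theorem: the
  # segment joining the midpoints of two sides of a triangle is
  # parallel to the third side.
  from sympy import symbols, prem

  u1, u2, u3, x1, x2, x3, x4 = symbols('u1 u2 u3 x1 x2 x3 x4')
  # Triangle A = (0,0), B = (u1,0), C = (u2,u3); M = (x1,x2) and
  # N = (x3,x4) are the midpoints of AC and BC (hypotheses = 0):
  h1 = 2*x1 - u2
  h2 = 2*x2 - u3
  h3 = 2*x3 - u1 - u2
  h4 = 2*x4 - u3
  g = x4 - x2      # conclusion: MN is parallel to AB (both "level")
  # Successively pseudo-divide the conclusion by the hypotheses,
  # eliminating the dependent coordinates in turn:
  r = g
  for h, x in [(h4, x4), (h3, x3), (h2, x2), (h1, x1)]:
      r = prem(r, h, x)
  print(r)         # 0: the conclusion follows from the hypotheses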
BACKGROUND
Professor Wu received his doctorate in France in the 1950's, and was a
member of the Bourbaki group. In the first National Science and
Technology Awards in China in 1956, Professor Wu was one of three
people awarded a first prize for their contributions to science and
technology. He is currently the president of the Chinese Mathematical
Society.
In 1977, Wu extended classical algebraic geometry work of Ritt to an
algorithm for proving theorems of elementary geometry. The method has
recently become well-known in the Automated Theorem Proving community;
at the University of Texas it has been applied to the machine proof
of more than 300 theorems of Euclidean and non-Euclidean geometry.
-------
∂01-Oct-86 1818 EMMA@CSLI.STANFORD.EDU Calendar, October 2, No. 1
Received: from CSLI.STANFORD.EDU by SAIL.STANFORD.EDU with TCP; 1 Oct 86 18:18:03 PDT
Date: Wed 1 Oct 86 17:14:06-PDT
From: Emma Pease <Emma@CSLI.STANFORD.EDU>
Subject: Calendar, October 2, No. 1
To: friends@CSLI.STANFORD.EDU
Tel: (415) 723-3561
C S L I C A L E N D A R O F P U B L I C E V E N T S
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
October 2, 1986 Stanford Vol. 2, No. 1
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR NEXT THURSDAY, October 9, 1986
12 noon TINLunch
Ventura Hall Reading: "Meditations on a Hobby Horse or the
Conference Room Roots of Artistic Form," by E. H. Gombrich
Discussion led by Geoff Nunberg (Nunberg.pa@xerox)
2:15 p.m. CSLI Seminar
Ventura Hall Situations and Semantic Paradox
Trailer Classroom John Etchemendy and Jon Barwise (barwise@csli)
3:30 p.m. Tea
Ventura Hall
--------------
ANNOUNCEMENT
Thursday activities will be similar to last year's activities.
TINLunches will continue. Each week a member of CSLI will lead a
lunchtime discussion on a paper which will be available ahead of time
at the front desk of Ventura Hall. You may bring a bag lunch, or, if
you arrive early, lunch may be bought at Ventura Hall. Thursday
seminars will be given by the research groups at 2:15 every Thursday.
However, no regular colloquia are planned for autumn and winter
quarters. Special colloquia will be announced from time to time.
The first CSLI Monthly of the new academic year comes out on October
16.
--------------
NEXT WEEK'S TINLUNCH
Reading is E. H. Gombrich's essay
Meditations on a Hobby Horse or the Roots of Artistic Form
Discussion led by Geoff Nunberg
October 9, 1986
This is a classic paper in art criticism in which E. H. Gombrich
formulates certain basic questions about the nature of representation,
in terms that are surprisingly relevant to a number of strands in
current CSLI research. He takes as his occasion a child's hobby
horse--a broomstick with a crudely carved head--and asks after its
relation to horses and horsehood. In his words: "How should we address
it? Should we describe it as an `image of a horse'?...A portrayal of
a horse? Surely not. A substitute for a horse? That it is." He goes
on to suggest that the "substitute" relation, which depends more on
functional than on formal similarities, underlies representation
in general.
--------------
NEXT WEEK'S SEMINAR
Situations and Semantic Paradox
John Etchemendy and Jon Barwise
October 9, 1986
This seminar will be about the Liar paradox and its implications for
the foundations of semantics. It is based on our recently completed
book, "The Liar: an essay on truth and circularity."
-------
∂03-Oct-86 0906 EMMA@CSLI.STANFORD.EDU Late Newsletter Entry
Received: from CSLI.STANFORD.EDU by SAIL.STANFORD.EDU with TCP; 3 Oct 86 09:06:04 PDT
Date: Fri 3 Oct 86 08:16:59-PDT
From: Emma Pease <Emma@CSLI.STANFORD.EDU>
Subject: Late Newsletter Entry
To: friends@CSLI.STANFORD.EDU
Reply-To: dlevy.pa@xerox.com
Tel: (415) 723-3561
Return-Path: <dlevy.pa@Xerox.COM>
Received: from Xerox.COM by CSLI.STANFORD.EDU with TCP; Thu 2 Oct 86 17:42:25-PDT
Received: from Cabernet.ms by ArpaGateway.ms ; 02 OCT 86 17:43:51 PDT
Date: 2 Oct 86 17:43 PDT
From: dlevy.pa@Xerox.COM
Subject: Late newsletter entry
To: emma@CSLI.STANFORD.EDU
cc: dlevy.pa@Xerox.COM
Message-ID: <861002-174351-1656@Xerox>
Emma:
Could you send this out through the usual channels as a late newsletter
entry?
Thanks,
David
Reading and Discussion Group on Figural Representation
Organizers: David Levy, Geoff Nunberg
First meeting: Thursday, October 9 at 10 AM, Ventura Hall
We are forming a reading and discussion group to explore the nature of
figural (roughly speaking, visual) representation. Systems of figural
representation include writing systems, systems of musical notation,
screen "icons," bar graphs, architectural renderings, maps, and so
forth. This topic lies at the intersection of various concerns relevant
to a number of us at CSLI, at Xerox PARC, and at SRI -- theoretical
concerns about the nature of language and representation and their
manifestation in the building of systems and the design of visual
notations for formal languages. There is currently no well-motivated
framework for discussing such material, no map on which to locate
important terms such as "document," "text," "icon," and "format." But
there is clearly a coherent subject matter here waiting to be explored.
Topics we want to look at in early meetings include:
1. Properties of the figural.
2. Figural representation and representation in general.
3. The typology of figural systems.
4. Writing as a figural representation system; distinctive properties
of written language.
5. The technological basis for figural representation (from writing to
print to the computer).
Initially, we plan to organize the discussion around readings drawn from
the literatures of a number of disciplines, among them linguistics,
psychology, literary theory, art criticism, AI, anthropology and
history. We expect to meet once a week (or once every two weeks) at
Ventura Hall (CSLI), starting Thursday morning, October 9, at 10AM.
Please note that we consider this to be a working group, not a general
public forum or a TINLunch.
At our first meeting, we will be discussing a short paper, "Visible
Language," which outlines some of the areas we will be concerned with.
Copies are available at the Ventura Hall desk.
-------
∂08-Oct-86 1854 EMMA@CSLI.STANFORD.EDU CSLI Calendar, October 9, No. 2
Received: from CSLI.STANFORD.EDU by SAIL.STANFORD.EDU with TCP; 8 Oct 86 18:54:20 PDT
Date: Wed 8 Oct 86 17:43:32-PDT
From: Emma Pease <Emma@CSLI.STANFORD.EDU>
Subject: CSLI Calendar, October 9, No. 2
To: friends@CSLI.STANFORD.EDU
Tel: (415) 723-3561
C S L I C A L E N D A R O F P U B L I C E V E N T S
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
October 9, 1986 Stanford Vol. 2, No. 2
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR THIS THURSDAY, October 9, 1986
12 noon TINLunch
Ventura Hall Reading: "Meditations on a Hobby Horse or the
Conference Room Roots of Artistic Form," by E. H. Gombrich
Discussion led by Geoff Nunberg
(Nunberg.pa@xerox.com)
2:15 p.m. CSLI Seminar
Ventura Hall Situations and Semantic Paradox
Trailer Classroom John Etchemendy and Jon Barwise
(barwise@csli.stanford.edu)
3:30 p.m. Tea
Ventura Hall
←←←←←←←←←←←←
CSLI ACTIVITIES FOR NEXT THURSDAY, October 16, 1986
12 noon TINLunch
Ventura Hall Reading: To be announced
Conference Room Discussion led by John Perry (John@csli.stanford.edu)
Abstract in next week's calendar
2:15 p.m. CSLI Seminar
Redwood Hall Categorial Unification Grammar
Room G-19 Lauri Karttunen and Hans Uszkoreit
(Lauri@sri-warbucks.arpa)
Abstract in this calendar
3:30 p.m. Tea
Ventura Hall
--------------
NEXT WEEK'S SEMINAR
Categorial Unification Grammar
Lauri Karttunen and Hans Uszkoreit
October 16, 1986
The introduction of unification formalism and new types of rules has
brought about a revival of categorial grammar (CG) as a theory of
natural language syntax. We will survey some of the recent work in
this framework and discuss the relationship of lexical vs. rule-based
theories of syntax.
Non-transformational syntactic theories traditionally come in two
varieties. Context-free phrase structure grammar (PSG) consists of a
very simple lexicon and a separate body of syntactic rules that
express the constraints under which phrases can be composed to form
larger phrases. Classical CG encodes the combinatorial principles
directly in the lexicon and, consequently, needs no separate component
of syntactic rules.
Because a unification-based grammar formalism makes it easy to
encode syntactic information in the lexicon, theories such as LFG and
HPSG, which use feature sets to augment phrase structure rules, can
shift much of that information into the lexicon. Thus syntactic
rules become simpler and fewer of them are needed. In this respect,
HPSG, for example, is much closer to classical CG than classical PSG.
Pure categorial grammars can also be expressed in the same
unification-based formalism that is now being used for LFG and HPSG.
This includes more complex versions of CG employing the concepts of
functional composition and type raising as they are currently
exploited in the grammars of Steedman, Dowty, and others. The merger
of strategies from categorial grammar and unification grammars
actually resolves some of the known shortcomings of traditional CG
systems and leads to a syntactically more sophisticated grammar model.
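As a toy sketch of the categorial idea (ours, in Python, not the
speakers' formalism), the combinatorial facts live in lexical
categories and a single rule of backward application combines them,
with unification reduced here to a bare agreement check:

  # Toy sketch (ours): combinatorial facts in the lexicon; one rule
  # of backward function application combines the signs.
  lexicon = {
      "Brian":  {"cat": "NP", "num": "sg"},
      "sleeps": {"cat": ("S", "\\", "NP"), "arg_num": "sg"},
  }

  def compatible(a, b):               # unification of atomic values
      return a is None or b is None or a == b

  def backward_apply(fn, arg):
      # Combine a functor of category X\Y with a preceding Y, giving X.
      cat = fn["cat"]
      if (isinstance(cat, tuple) and cat[1] == "\\"
              and arg["cat"] == cat[2]
              and compatible(fn["arg_num"], arg["num"])):
          return {"cat": cat[0]}
      return None                     # the two signs fail to combine

  print(backward_apply(lexicon["sleeps"], lexicon["Brian"]))  # S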
--------------
READING AND DISCUSSION GROUP ON FIGURAL REPRESENTATION
Organizers: David Levy, Geoff Nunberg
First meeting: Thursday, October 9 at 10 AM, Ventura Hall
We are forming a reading and discussion group to explore the nature of
figural (roughly speaking, visual) representation. Systems of figural
representation include writing systems, systems of musical notation,
screen "icons," bar graphs, architectural renderings, maps, and so
forth. This topic lies at the intersection of various concerns
relevant to a number of us at CSLI, at Xerox PARC, and at SRI---
theoretical concerns about the nature of language and representation
and their manifestation in the building of systems and the design of
visual notations for formal languages. There is currently no
well-motivated framework for discussing such material, no map on which
to locate important terms such as "document," "text," "icon," and
"format." But there is clearly a coherent subject matter here waiting
to be explored.
Topics we want to look at in early meetings include:
1. Properties of the figural.
2. Figural representation and representation in general.
3. The typology of figural systems.
4. Writing as a figural representation system; distinctive
properties of written language.
5. The technological basis for figural representation (from
writing to print to the computer).
Initially, we plan to organize the discussion around readings drawn
from the literatures of a number of disciplines, among them
linguistics, psychology, literary theory, art criticism, AI,
anthropology and history. We expect to meet once a week (or once
every two weeks) at Ventura Hall (CSLI), starting Thursday morning,
October 9, at 10AM. Please note that we consider this to be a working
group, not a general public forum or a TINLunch.
At our first meeting, we will be discussing a short paper, "Visible
Language," which outlines some of the areas we will be concerned with.
Copies are available at the Ventura Hall desk.
-------
∂15-Oct-86 1753 EMMA@CSLI.STANFORD.EDU CSLI Calendar, October 16, No. 3
Received: from CSLI.STANFORD.EDU by SAIL.STANFORD.EDU with TCP; 15 Oct 86 17:53:37 PDT
Date: Wed 15 Oct 86 16:59:46-PDT
From: Emma Pease <Emma@CSLI.STANFORD.EDU>
Subject: CSLI Calendar, October 16, No. 3
To: friends@CSLI.STANFORD.EDU
Tel: (415) 723-3561
C S L I C A L E N D A R O F P U B L I C E V E N T S
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
October 16, 1986 Stanford Vol. 2, No. 3
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR THIS THURSDAY, October 16, 1986
12 noon TINLunch
Ventura Hall Reading: "Possible Worlds and Situations"
Conference Room by Robert Stalnaker
Discussion led by John Perry (John@csli.stanford.edu)
Abstract in this week's calendar
2:15 p.m. CSLI Seminar
Redwood Hall Categorial Unification Grammar
Room G-19 Lauri Karttunen and Hans Uszkoreit
(Lauri@sri-warbucks.arpa)
Abstract in last week's calendar
3:30 p.m. Tea
Ventura Hall
←←←←←←←←←←←←
CSLI ACTIVITIES FOR NEXT THURSDAY, October 23, 1986
12 noon TINLunch
Ventura Hall Reading: "Circumstantial Attitudes and Benevolent
Conference Room Cognition" by John Perry
Discussion led by David Israel
(Israel@csli.stanford.edu)
Abstract in next week's calendar
2:15 p.m. CSLI Seminar
Redwood Hall HPSG Theory and HPSG Research
Room G-19 Ivan Sag (Sag@csli.stanford.edu)
Abstract in this calendar
3:30 p.m. Tea
Ventura Hall
--------------
THIS WEEK'S TINLUNCH
Reading: "Possible Worlds and Situations" by Robert Stalnaker
Discussion led by John Perry
October 16, 1986
Stalnaker (and also Barbara Partee, in a paper I shall mention at
TINLunch), maintains that possible worlds semantics is an extremely
flexible and metaphysically benign (if not completely neutral)
framework. I will argue that this is not so, that possible worlds
semantics, in the form in which Stalnaker (and Partee) embraces it, is
metaphysically loaded in one of two quite different ways, either of
which incorporate assumptions that linguists and AI-researchers
shouldn't thoughtlessly adopt, and which philosophers should
thoughtfully avoid. --John Perry
--------------
NEXT WEEK'S SEMINAR
HPSG Theory and HPSG Research
Ivan Sag
October 23, 1986
This seminar presents an overview of the central ideas under
development by members of the CSLI HPSG project. Head-Driven Phrase
Structure Grammar is an information-based theory of the relation
between syntactic and semantic structure. The syntactic concepts of
HPSG evolved from Generalized Phrase Structure Grammar (GPSG) in the
course of the last few years through extensive interaction with
members of the CSLI FOG project. HPSG integrates key ideas of GPSG
with concepts drawn from Kay's Functional Unification Grammar and
Categorial Grammar and incorporates certain analytic techniques of
Lexical-Functional Grammar. The semantic concepts of HPSG are a hybrid
of Situation Semantics and the theory of thematic roles. Current HPSG
theory embodies a number of important design properties: monotonicity,
declarativeness, and reversibility; yet current HPSG analyses require
extensions of such standard frameworks as PATR-II. Current research
ideas will be surveyed, as well as ongoing work on the hierarchical
structure of the HPSG lexicon.
-------
∂16-Oct-86 1734 EMMA@CSLI.STANFORD.EDU CSLI Monthly
Received: from CSLI.STANFORD.EDU by SAIL.STANFORD.EDU with TCP; 16 Oct 86 17:34:23 PDT
Date: Thu 16 Oct 86 16:30:27-PDT
From: Emma Pease <Emma@CSLI.STANFORD.EDU>
Subject: CSLI Monthly
To: friends@CSLI.STANFORD.EDU
Tel: (415) 723-3561
The CSLI Monthly will be sent out some time on Friday and will
be about 33 pages long divided into 8 parts.
Those of you on Turing will not receive the Monthly; instead, you
can find it in <csli>csli-monthly.10-86.
-------
∂17-Oct-86 1431 EMMA@CSLI.STANFORD.EDU CSLI Monthly, 2:1, part 3
Received: from CSLI.STANFORD.EDU by SAIL.STANFORD.EDU with TCP; 17 Oct 86 14:31:22 PDT
Date: Fri 17 Oct 86 13:48:23-PDT
From: Emma Pease <Emma@CSLI.STANFORD.EDU>
Subject: CSLI Monthly, 2:1, part 3
To: friends@CSLI.STANFORD.EDU
Tel: (415) 723-3561
% start of part 3
RESPONSE
John Perry
Smith's argument is as follows:
1) An important function of natural language is to convey information.
2) Natural language is situated.
Hence, to the extent that situation semantics explains how natural
languages work, its account should apply to the language of situation
theory.
So the language of situation theory, or future developments of it,
should manifest the crucial properties of natural language.
I am a bit vague on how either of the conclusions follows from the
premises. I accept the premises and the first conclusion. The second
conclusion doesn't seem very plausible. Presumably, if statements of
situation theory are to convey information (or misinformation) about
how language works, they must share those properties of natural
language statements that allow them to convey information. But I
would expect there to be many crucial properties of the more natural
parts of natural language that the technical parts need not have. For
example, I don't think it would matter much if there were no agreement
on the pronunciation of the parts of (1) below that are not words of
ordinary English. Perhaps as situation theory's notation evolves, it
will become impossible for anyone but Latex hackers to produce
instances of it. These properties would sharply distinguish the
statements of situation theory from garden variety English statements,
and the latter at least would be a sad development. But I don't think
these differences would point to any significant deficiency of
situation theory as a theory that could be applied to its own
statements.
From the point of view of situation semantics, the most crucial
property of sentences for communicating information is that they
have efficient meaning, which allows a user to describe a situation
from her own situation in the world. For this reason situation semantics
emphasizes the meaning/interpretation distinction. It is crucial to
make the meaning/interpretation distinction with respect to statements
using the vocabulary and notation of situation theory, just as with
other statements. Consider Brian's example:
(1) s1 |= <<Loves, John, Mary; 1>>
This could be used by a great many people to say many different
things: same meaning, different interpretations. Brian might use it
in a class at Xerox PARC, to assert, or bring up for consideration,
that what has happened since 1400 makes it the case that John of
Edlingham loved Mary Queen of Scots, while David Israel might use it
in a class at SRI about a different situation, a different John, and a
different Mary. The meaning is the same in each case, but the
interpretation differs with context. Or it might be used, as it was in
Brian's article and as it is above, with no particular John and Mary in mind. There is
meaning, but no interpretation.
Thus the meaning of (1) should be taken to be a relation between
utterances and situations they describe, just as with other English
sentences.
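A crude way to picture this (our encoding in Python, not situation
theory proper) is to model the meaning of (1) as a function from a
context of use, which fixes who ``John'' and ``Mary'' are and which
situation is described, to an interpretation:

  # Crude illustration (our encoding, not situation theory proper):
  # same meaning, different interpretations as the context varies.
  def meaning_of_1(context):
      infon = ("Loves", context["John"], context["Mary"], 1)
      return (context["situation"], infon)

  parc_use = {"John": "John of Edlingham",
              "Mary": "Mary Queen of Scots",
              "situation": "the situation described at Xerox PARC"}
  sri_use = {"John": "a different John",
             "Mary": "a different Mary",
             "situation": "the situation described at SRI"}

  print(meaning_of_1(parc_use))
  print(meaning_of_1(sri_use))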
Brian seems to say that the predicate calculus cannot be used to make
assertions, and that sentences in situation theory notation, while
they can be used to make assertions, cannot be used for other speech
acts. But it seems to me that both the predicate calculus and the
language of situation theory can be used to make assertions. Some
examples:
(2) For all x (x=x)
(3) |=<<Involves [x|thing,x;1], [x| =,x,x;1] ;1 >>
Of course, these same sentences could be used for other purposes, but
this goes for other sentences in natural language too, such as "John
loves Mary," which can be used to make assertions, give examples, and
so forth.
The predicate calculus is a formal language while the language of
situation theory is not. The latter is just a part of technical
English, at least at present. (Of course, the predicate calculus, or
an informal notation based on it, gets used this way too.) The
notation of situation theory, then, is not bound by rigid rules but
can be used in novel ways in combination with bits of older parts of
English. I can even ask a question:
(4) |=<<Involves [x|puce,x;1], [x| ugly,x;1] ;1 >> ?
I suspect, however, that by asking this question I will have revealed
not only ignorance about a certain color but also about what exactly
Brian was getting at here.
Brian is most concerned about two differences having to do with what
he calls objectification. This means that "aspects of content" that
in more natural parts of language are contributed by circumstances of
use, or signified by nonnominal constructions, are in situation
theory's notation designated by nominal constructions. I shall ramble
on a bit about these matters.
As to the first point, the more natural parts of language have plenty
of devices for making explicit in language what can be left to the
circumstances to supply -- devices that are used when lack of shared
context or shared understandings threatens. You say, "It's four
o'clock"; I reply, "You mean four o'clock Pacific Coast time -- I'm
calling you from Denver."
There is a long tradition of thinking as Brian seems to, that using
nominal constructions carries a lot of metaphysical weight. That may
be so, but the view that sometimes goes along with this, of trying to
avoid such constructions, seems wrong-headed to me.
Consider a simple language with sentences like "Brian sleeps" and
"Brian stirs" used by some hypothetical folk in the Palo Alto wilds.
There are no nominals standing for times nor even any tenses, we may
suppose. The theorist sees, however, that the truth-values of the
sentences change in systematic ways with what goes on at various
times. Everyone assents to "Brian sleeps" when used on those
occasions when Brian is sleeping, dissents from it when Brian is
stirring. So he concocts a theory: a use of "Brian sleeps" at
time t is true iff Brian is sleeping at t. Is there any virtue in the
theorist, having noted the dependence of truth on times, hesitating to
adopt explicit reference to times in his own vocabulary? I must admit
I cannot see it. It is important to see that the theorist is not
thereby saying that the folk in question have the same concept of time
and times he uses in theorizing about their language and behavior.
We won't be clear about this last point, if we confuse the project of
constructing a theory that shows how the informational or other
content of uses of sentences systematically depends on the situation
of use, with the project of producing, in one's own language,
concept-preserving translations of those sentences. It seems to me a
virtue of situation theory that it helps us make this distinction.
Let me abuse another example to try to develop this point. A theorist
who knows about time zones is studying the language of Californians
who do not. His theory is that "It is n o'clock," used by a member of
this group at time t, is true iff at t it is n o'clock Pacific Coast
time. The theorist uses "is n o'clock in" as a two-place predicate of
times and has a supply of names of time zones. He uses this heavy
equipment to state a theory of the use of a language that is innocent
of names of time zones and uses "is n o'clock" as a property of times.
He does not produce a translation of the sentences of the group, but
an account of their conditions of truth. This approach has the
advantage that he can say things like, "The reason they don't need a
concept of a time zone is because they never go on vacations and can't
afford to call long-distance and don't watch national TV, and so the
information about time they pick up in perception is always about
Pacific Coast time, and the actions their knowledge of time controls
are always actions whose appropriateness is determined by Pacific
Coast time--e.g., they go to bed when it is 10 o'clock Pacific Coast
time, which they can find out by looking at their watches, which are
all set to Pacific Coast time."
I am inclined to think that (a) any language, natural or designed,
used by agents to store and convey information that controls their
actions, will rely on aspects of context to contribute to content; (b)
a satisfactory theory of how such languages are used to store and
convey information by the agents that use them will make some of these
aspects explicit, since relationships that merely need to obtain for
things to work, need to be stated to explain how they work. But, as
Brian points out, the theory will itself be used to convey
information, and so will itself rely on aspects of content contributed
by context. There is no reason to expect we can get to a theory that
is itself couched in a language that does not rely on the embedding
situation to supply content.
Basically, although there may be deeper issues involved in the
differences between the statements of situation theory and more garden
variety statements that Brian notes, I don't yet see a problem.
% end of part 3
-------
∂17-Oct-86 1434 EMMA@CSLI.STANFORD.EDU CSLI Monthly, 1:1, part 1
Received: from CSLI.STANFORD.EDU by SAIL.STANFORD.EDU with TCP; 17 Oct 86 14:34:10 PDT
Date: Fri 17 Oct 86 13:45:21-PDT
From: Emma Pease <Emma@CSLI.STANFORD.EDU>
Subject: CSLI Monthly, 1:1, part 1
To: friends@CSLI.STANFORD.EDU
Tel: (415) 723-3561
CSLI MONTHLY
-----------------------------------------------------------------------
October 1986 Vol. 2, No. 1
-----------------------------------------------------------------------
A monthly publication of
The Center for the Study of Language and Information
CSLI was founded early in 1983 by researchers from Stanford University,
SRI International, and Xerox PARC to further research and development
of integrated theories of language, information, and computation. CSLI
headquarters and the publication offices are located at the Stanford
site, Ventura Hall, Stanford, CA 94305.
------------------
Contents
E Pluribus Unum?
by Tom Wasow 1
The Wedge 2
Is LOST a Natural Language?
by Brian Cantwell Smith 2
Response
by John Perry 3
Project Reports 4
Designing a Situated Language
by Susan U. Stucky 4
Quantifiers, Referring Noun Phrases, and Elliptical Verb
Phrases
by Stanley Peters and Mark Gawron 5
Structure of Written Languages
by Geoffrey Nunberg 6
Summer Meetings of GTDS
by Peter Sells 7
CSLI Site Directors 7
New Postdoctoral Fellows 1986/87 7
CSLI Visiting Scholars 7
Announcement and Call for Papers 7
CSLI Publications 8
Apropos 8
------------------
E PLURIBUS UNUM?
Tom Wasow
How many theories of grammar are under development at CSLI? To the
casual reader of our literature, it must appear that there are a great
many. Indeed, the list of types of grammars used in various CSLI
publications reads like a recipe for alphabet soup: LFG, GPSG, HPSG,
FUG, CG, PATR, D-PATR (a.k.a. HUG), etc. What do these have in
common, other than geographical proximity? Is it reasonable to lump
them together, as some have begun to do, under the heading GBAG
(Generalized Bay Area Grammar)?
The existence of multiple labels does not, by itself, entail the
existence of any deep disagreements, especially in the sort of
environment we have at CSLI. Nowhere else is there so much
collaboration between academic linguists and members of industrial
centers of research. This involves both institutionalized and
informal interactions on issues ranging from the philosophical
foundations of grammatical theory to the efficient implementation of
particular formalisms. Ours is a unique and innovative environment
for computational experimentation with theoretical ideas. Therefore
new approaches can be invented, formalized, and explored especially
rapidly here. Consequently, labels tend to proliferate. This does
not mean that our ideas are any more divergent than the loosely
related ideas that might elsewhere be subsumed under a single
inexplicit theoretical framework.
Before we go on to make substantive comparisons, some points of
clarification are necessary. First, the theories in question are
changing, and it is not always easy to determine what the essential
properties of any theory are. Indeed, at least one theory on the list
above (HPSG, for Head-driven Phrase Structure Grammar) evolved out of
another (GPSG, for Generalized Phrase Structure Grammar); while
interest in GPSG remains strong, it is no longer being actively
developed in any of CSLI's eighteen research projects. Second, it is
important to recognize that not all of the grammatical research here
is directed toward the same goal. In particular, PATR (in its various
incarnations) is designed to be a general formalism for writing and
implementing a variety of different kinds of grammars. It is not (and
was never intended to be) a theory of natural language structure.
Thus, it doesn't make sense to try to compare it with systems designed
as linguistic theories.
In fact, it seems reasonable at this point to say that there are at
most three grammatical theories currently being used in research at
CSLI, namely, LFG (for Lexical Functional Grammar), HPSG, and what has
recently been dubbed Categorial Unification Grammar (henceforth, CUG).
The following discussion will be limited to a consideration of the
points of similarity and difference among these three.
Let us turn now to the substantive question of what CSLI's grammatical
theories have in common. The most important thing is a shared
conception of how the information carried by a complex linguistic
expression is related to the information carried by its parts. In all
three theories, the structural description of a sentence (or any other
kind of phrase, for that matter) is built up out of the partial
information contained in its constituent parts by identifying certain
pieces with one another. The formalisms employed in the theories
require certain structures to be identical; this technique is used to
encode a wide variety of types of dependencies between linguistic
elements.
The formal mechanism used by all of CSLI's theories to realize this
general idea is the operation of unification, which is simply the
merger of two mutually consistent structures. How it works can be
sketched by considering the phenomenon of subject-verb agreement in
English (which is analyzed in essentially the same way by all of the
theories in question).
In a sentence like "The fish swim," the noun "fish" is third person,
but contributes no information about its number, whereas the verb
"swim" carries the information that its subject must not be third
person singular. These two pieces of partial information are
compatible, so they can unify, resulting in a well-formed sentence
with a third person plural subject. In "*The whale swim," on the
other hand, the noun "whale" is third person singular, so the noun and
verb carry incompatible information, and unification is impossible.
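To make the mechanism concrete, here is a minimal sketch of
unification over feature structures, written in Python purely for
illustration; none of the theories named above is committed to this
encoding, and the verb's real demand ("not third person singular," a
negative constraint) is simplified here to a plural number feature.

   def unify(f1, f2):
       # Merge two feature structures (nested dicts); fail on conflict.
       merged = dict(f1)
       for feat, val in f2.items():
           if feat not in merged:
               merged[feat] = val                  # new partial information
           elif isinstance(merged[feat], dict) and isinstance(val, dict):
               sub = unify(merged[feat], val)      # recurse on substructures
               if sub is None:
                   return None
               merged[feat] = sub
           elif merged[feat] != val:
               return None                         # incompatible information
       return merged

   fish  = {"person": 3}                  # noun: silent about number
   whale = {"person": 3, "number": "sg"}  # noun: third person singular
   swim  = {"number": "pl"}               # verb's demand on its subject

   print(unify(fish, swim))   # {'person': 3, 'number': 'pl'}: "The fish swim"
   print(unify(whale, swim))  # None: "*The whale swim" fails to unify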
This sort of analysis differs in crucial respects from approaches
which posit a transformation marking the finite verb's agreement with
the subject. Instead of postulating multiple levels, with rules
manipulating their form, CSLI's theories generate surface structures
directly, making use of unification to account for dependencies
between the parts of a sentence.
This common conceptual and formal core is implemented in PATR in a
manner general enough to permit the encoding of analyses drawn from a
number of different theoretical frameworks. While LFG and HPSG both
employ additional mechanisms that are not straightforwardly
formalizable in PATR, PATR has provided a common medium for the
implementation and comparison across theories of the analyses of a
number of linguistic phenomena. This has been a valuable exercise,
and has been one major focus of the Foundations of Grammar project at
CSLI.
Another common property of CSLI's grammatical theories is their
declarative character. While most approaches to syntax in recent
decades have employed sequential derivations in the analysis of
sentence structures, the theories here have been largely
nonprocedural. The grammars themselves establish relations among the
elements in a sentence. This information can be used in different
kinds of processing (e.g., parsing or generating), but it is
inherently nondirectional. Hence, for example, questions of rule
ordering (intrinsic or extrinsic), which have exercised syntacticians
for years, simply do not arise within these theories.
A slightly less abstract property shared by the grammatical theories
under discussion (though not by GPSG) is the central role of the
lexicon as the repository of most linguistic information. Lexical
entries are conceived of as highly articulated structures, containing
most of the syntactic, semantic, and phonological information about
the language. Language-particular rule systems are assumed to be
relatively impoverished (though a variety of mechanisms are posited to
capture redundancies within the lexicon). This tendency is also
evident in work being done elsewhere, including, notably,
Government-Binding theory. Its culmination is the revival of
categorial grammar as a serious theory of natural language syntax. In
this work, exemplified at CSLI by CUG, even the information about how
words combine into phrases is encoded into the lexicon, rather than in
a separate set of phrase structure rules.
One consequence of these high-level, abstract commonalities among
CSLI's theories of grammar is that many of the specific analyses are
rather similar. The near identity of the treatments of agreement has
already been noted. Similarly, they all analyze the English
active/passive alternation as a lexical relation, associating pairs of
verbs whose syntax, semantics, and morphology are systematically
related. The existential "there" is likewise given a lexical
treatment, its distribution being determined by the co-occurrence
restrictions different verbs impose on their subjects and objects.
Even the analysis of control (that is, the identification of "missing"
subjects in the complements to such verbs as "try," "seem,"
"persuade," "believe," and "promise") exhibits certain uniformities
across CSLI's theories: all of them involve lexically identifying one
of a verb's arguments with its complement's subject; no use is made of
movement, deletion, or empty nodes, as is done in many other syntactic
theories.
In each of these cases, the theories differ in the details of their
analyses, but they agree in their general outlines. Viewed from a
perspective broad enough to include such theories as standard
transformational grammar, Government-Binding theory, Relational
Grammar, and GPSG, these similarities among CSLI's theories seem quite
significant.
There remain, however, a number of points of substantial disagreement.
One obvious one concerns the difference in LFG between
c(onstituent)-structure and f(unctional)-structure, for CSLI's other
theories make no such distinction. LFG posits a bifurcation of the
syntactic information about a sentence into information about phrase
structure and information about grammatical function; these are
encoded in rather different ways into c-structure and f-structure.
HPSG and CUG, on the other hand, employ a single type of data
structure to represent all grammatical information. This difference
leads to some substantive linguistic issues. For example, do verbs
ever select arguments of particular grammatical categories (requiring,
say, that complements be adjective phrases), or is such selection
strictly a matter of grammatical function, and perhaps semantics
(requiring, say, that complements be any category that can be used
predicatively)? LFG, by making category a c-structure attribute and
doing subcategorization on f-structures, excludes the former
possibility. The other theories would permit it.
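The difference in data structures can be sketched in Python (with
invented attribute names; this is neither theory's official notation).
Because the f-structure carries no category attribute, a verb that
subcategorizes on f-structures cannot demand, say, an adjective-phrase
complement, which is exactly the asymmetry just noted.

   # LFG: two structures. Categories live only in the c-structure tree;
   # the f-structure records grammatical functions, with no CAT attribute.
   c_structure = ("S", [("NP", ["Kim"]), ("VP", [("V", ["yawned"])])])
   f_structure = {"PRED": "yawn<SUBJ>", "TENSE": "past",
                  "SUBJ": {"PRED": "Kim"}}

   # HPSG/CUG style: one structure, in which category and function-like
   # information sit side by side, so selection for a particular
   # category is at least statable.
   sign = {"CAT": "S",
           "HEAD": {"CAT": "V", "FORM": "yawned"},
           "SUBJ": {"CAT": "NP", "SEM": "Kim"}}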
Much of what LFG puts into f-structures is included under the
semantics attribute in the structures posited by HPSG and CUG. For
example, LFG treats the control information alluded to above in terms
of primitive (syntactic) notions of "subject" and "object," whereas
the other theories treat this as part of the meaning of the verb.
This reflects a general commitment on the latter's part to a tight fit
between their syntactic and semantic analyses. LFG, on the other
hand, has a separate (and nontrivial) semantic component providing
interpretations for f-structures.
Another obvious difference among these theories is simply the
difference between categorial and rewriting systems. A common
characteristic of LFG and HPSG is that they employ a system of phrase
structure rules. Such rules express the combinatorial principles of
the language. In categorial grammar, these principles are encoded
into the category labels of lexical items. Thus, in CUG, the
definitions of grammatical categories include the information about
how they combine with each other, whereas LFG and HPSG rely on phrase
structure rules to specify how categories are combined. This
difference has been exploited in the analysis of coordinate
conjunction to permit CUG to treat apparent cases of coordination of
nonconstituents (e.g., "Pat gave a book to Chris and a record to Lee,"
where each conjunct--"a book to Chris" and "a record to Lee"--is a
string of phrases, not a single phrase) in the same manner as
constituent coordination.
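How combinatorial information can live in category labels is easy to
sketch (Python again, and only the two basic application rules; the
CUG treatment of nonconstituent coordination needs further
combinators, such as type raising and composition, not shown here).

   # A category is a basic symbol or a triple (slash, result, argument):
   # ("/", X, Y) seeks a Y to its right; ("\", X, Y) seeks a Y to its left.
   TV = ("/", ("\\", "S", "NP"), "NP")    # transitive verb: (S\NP)/NP

   def forward(fn, arg):
       # Forward application: X/Y followed by Y yields X.
       if isinstance(fn, tuple) and fn[0] == "/" and fn[2] == arg:
           return fn[1]
       return None

   def backward(arg, fn):
       # Backward application: Y followed by X\Y yields X.
       if isinstance(fn, tuple) and fn[0] == "\\" and fn[2] == arg:
           return fn[1]
       return None

   vp = forward(TV, "NP")       # "read" + "every book" -> S\NP
   print(backward("NP", vp))    # "some teacher" + vp   -> S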
There are of course many other differences, at various levels of
detail. Indeed, it would take a book-length piece to do a really
systematic comparison of the theories. Nevertheless, even this
cursory survey is enough to provide an answer, albeit a somewhat
equivocal one, to the question of whether GBAG exists. At this point,
the answer must be that there is no single grammatical theory at CSLI,
but there are several closely related theories. Indeed, the
differences among these theories are probably no greater than can be
found within research that goes under a single theoretical label.
Thus, the varieties of transformational grammar (or perhaps even of
Government-Binding theory) exhibit no less diversity than is found
among LFG, HPSG, and CUG. Moreover, it is clear that the past three
years of research here have produced considerable convergence among
syntacticians, with the result that we are closer to a unified theory
than we were when the Center was founded.
[Footnote: One issue on which the grammarians at CSLI appear to have
widely divergent views is the one addressed in the present essay.
Reactions to an earlier version ranged from the claim that the
differences among the theories were largely illusory to the claim that
they were substantially greater than I made them out to be. While
some colleagues agreed with my overall assessment, it should be noted
that others took strong exception.]
-------
∂17-Oct-86 1448 EMMA@CSLI.STANFORD.EDU CSLI Monthly, 2:1, part 4
Received: from CSLI.STANFORD.EDU by SAIL.STANFORD.EDU with TCP; 17 Oct 86 14:47:29 PDT
Date: Fri 17 Oct 86 13:49:26-PDT
From: Emma Pease <Emma@CSLI.STANFORD.EDU>
Subject: CSLI Monthly, 2:1, part 4
To: friends@CSLI.STANFORD.EDU
Tel: (415) 723-3561
% start of part 4
------------------
PROJECT REPORTS
DESIGNING A SITUATED LANGUAGE
Susan U. Stucky
Project Participants: Curtis Abbott, Jon Barwise, Adrian Cussins,
Mike Dixon, John Etchemendy, Mark Gawron,
John Lamping, Ken Olson, Brian Smith, Susan Stucky
Designing and building the Situated Inference Engine (SIE), a
computational system that engages in situated inference, is a
collaborative project sponsored by STASS and the Embedded Computation
projects. From the beginning, the involvement of members from both
these projects has been easily explained: members of STASS are interested
because they are developing a theory of inference that includes cases
which depend on circumstance for their interpretation (e.g.,
TO-THE-RIGHT-OF(X)); the Embedded Computation folks are equally
interested because computational practice suggests that the
interpretation of internal structures is similarly contextually
sensitive. (See also the initial report on the SIE in Project Reports:
Embedded Computation, CSLI Monthly, June 1986.) But because the basic
model of the SIE is conversational--of a person issuing utterances to
the SIE, to which the SIE produces appropriate replies--there is a
third dimension, namely its linguistic aspect, which, it seems to me,
makes the project of substantial interest to linguists.
As in inference and computation, I assume the situatedness of
language; however, the point is not just that the language the SIE
uses will be situated (that much is true of current natural language
systems). Rather, the interest lies in the SIE's being designed with
two additional purposes in mind: (i) all three--inference, internal
structures, and language--will be situated in compatible ways, and (ii)
there is a commitment to develop a common theoretical framework in
terms of which to understand the full interplay among language,
content, and the internal structures, etc.
But in order to see what this comes to, let's spend a moment looking
at the overall structure of the problem. The initial subject domain of
the SIE is one of schedules and calendars; thus we imagine saying to
some ultimate version of the SIE "I have an appointment in an hour
with Bill Miller" or "Am I free for lunch on Wednesday?" And we
imagine its replying in various appropriate ways: "No, that's
impossible, you're scheduled to meet with the dean then" or "Yes, but
remember that you have an appointment at 12:30." That's a pretty
smart scheduler. And for anyone interested in language, its design
brings up a host of issues. Some of these are familiar from natural
language systems of various stripes; others take on a slightly
different cast, traceable, in the end, to our insistence on situated
inference and our stance on computation.
First, there are elements of language that depend for their
interpretation on circumstance. Pronouns are a well-known case: their
interpretation depends on the structure of the discourse and (if the
linguists are right) on the structure of the utterance itself. Tense
is another instance. Basically, we will need an account of the
structure of the language, and of the structure of the discourse and
of the constraints that hold between the two domains. And then we
will need an account of how all of that is related to the situation
being described. In short, we need nothing more or less than a
full-blooded relational account familiar from situation semantics. An
account of this sort will constitute our theoretical account of the
external facts.
Then there is the matter of the internal facts: how the language is
processed and how the language is related to the inference that gets
done. Among other things, we want to get from an utterance u in the
input language to what, following Brian Smith, we will call an
impression i, some internal state in the machine. One possible
constraint is that u and i have the same interpretation, that is, that
u and i describe the same state of the world. (Of course, u might
correspond to one or more i's, and vice versa, but let's stick to a
simple case here.) A subtle but important point is that u and i can't
(by and large) have the same meaning: if we have adopted a relational
account of meaning, then what u is related to (e.g., states of the
world and i) and what i is related to (e.g., states of the world,
other states of mind, ahem, the machine, and u) are likely not to be
the same. This perspective rules out some familiar approaches to
natural language processing, namely, the ones in which a
representation of the syntax of u (R(s,u)) is first computed (e.g., by
parsing), whereupon a representation of the meaning of u (R(m,u)) is
said to be computed from R(s,u), whereupon it is assumed that R(m,u)
is the same as R(m,i) (the representation of the meaning of i).
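Purely to fix the notation, the pipeline being ruled out looks like
this as a minimal Python sketch (the function names are mine, and the
bodies are stubs, not a real parser or interpreter):

   def parse(u):
       # Compute R(s,u), a representation of the syntax of utterance u.
       return ("syntax-of", u)

   def interpret(r_s):
       # Compute R(m,u), a representation of the meaning of u, from R(s,u).
       return ("meaning-of", r_s[1])

   # The assumption under attack: the impression i simply is (or shares)
   # this meaning representation, i.e., R(m,u) = R(m,i).
   i = interpret(parse("Am I free for lunch on Wednesday?"))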
Let's get back to u and i. Note that you can't compute the
interpretation of i itself, at least not if it's some state of affairs
in the world. The best you can do is compute a representation of the
interpretation of i. What you really want is i itself. How then do
you get from u to i? Equally important, how do you get from i to u?
And what role does our theoretical account of the external facts play
in this internalization process? Is it embodied in the
internalization process? To whatever extent it is, is it implicitly
embodied or explicitly embodied? To what extent is the structure of u
affected by the internalization process itself?
Finally, if inference is really going to be situated, then we won't be
needing to flesh out (or even necessarily disambiguate) absolutely
everything upon internalization. For instance, we might expect our
situated robot, upon discovering a note on Barwise's door saying "I'm
at lunch" to infer directly that Barwise was not there then and so not
deliver the cup of tea it was carrying; and do this without using a
sort of logical form that has the import of "Jon Barwise is not in his
office at 12:00 p.m. 15 October 1986." In other words, we are going
to expect that the SIE "do inference over" situated representations of
some sort. We expect this because of the overlap in the (temporal)
circumstances of the situation of inference and the situation being
reasoned about. The SIE's being in the stuff it reasons about is
precisely what makes it situated.
So whatever are we to make of this? Any sane linguist might, at this
point, have bailed out of the project, for it is obvious that we can't
address all these issues for any natural language. But there is
another alternative, which is familiar to logicians and to computer
scientists, though not to linguists, and that is to design an
artificial language (what I might have been tempted to call a formal
language but for the fact that Barwise and Smith get very exercised if
you use that term). Rather than selecting a fragment of natural
language (which is a more familiar way a linguist controls the amount
of data to be considered), one simply (!) designs in the desired
properties so that the properties are all structurally marked in the
language. I admit that I was skeptical at the beginning--why would
anyone spend time working on an artificial language when you could
work on the real thing instead? But there are good reasons for the
technique and good reasons why linguists should be involved. (And
here, I think Mark Gawron, the other linguist on the project, would
agree.) This technique turns out to have an added advantage above and
beyond the use of a fragment. Because the language is embedded in a
larger system, we can be clearer about which properties are properties
solely due to the structure of the language itself and which
properties of the language are due to its interaction with the rest of
the system. Moreover, we can experiment with various configurations
of these properties.
In the present case, for instance, I want a language whose structure
is related to the structure of the discourse, and to the structure of
the situation being described, one that is internalizable by some
fairly straightforward mechanism, and whose structure is related in
some obvious way to the inference process. Thus, designing the
language consists not only of specifying its structure, but of giving,
relevant to it, both a complete theoretical account of the external
facts and a complete theoretical account of the internal facts. Even
that is a tall order, and we can suppose the language will not be very
interesting in and of itself. (It is wise to anticipate that even
the whole system will not be very interesting either.) What is
interesting is the theoretical account itself, particularly the
framework in which the theoretical account is instantiated, which can
often be more complex than the ingredient structures themselves.
Again, the point is to see how the responsibility for various
properties gets allocated across the whole system.
As a first cut on the problem, Mark and I have undertaken the design
of Pidgin, which will be the language of the first situated inference
engine, SIE-0. In this first version I have been concerned to include
some natural-language-like devices that seem to be the glue of
conversation. For instance, I have designed in a rudimentary version
of notions like the topic of the discourse, the subject, and the like.
Similarly, I have added a rudimentary version of what linguists refer
to as contrastive focus as evidenced in English sentences with
"emphatic stress," e.g., "No, I meant this Tuesday, not next Tuesday."
Take the notions of topic and subject that have been bandied about in
linguistic theory for so long. Are they solely properties of the
language? What connections do they have to inference, to the discourse
structure, etc.? Both topic and focus are properties
that designers of logical or other artificial languages do not
generally design in, even though they seem central to language use.
But, by specifying the effects on all the relevant domains (e.g., the
discourse situation, the described situation, the internalization
process, and the process of inference) one actually begins to work out
a fuller account of natural language and, in some cases, to formulate
new hypotheses about natural language itself.
For example, my first cut at getting at the relations between subject
(of the sentence) and topic (of the discourse) is necessarily a crude
one. Linear order isn't used (as it is in some languages) to
designate the "subject" of the sentence. Instead a term may be
underlined. Position in the string of terms is used to indicate which
argument role of the relation designated by the predicate each term
fills. To use the predicate 'eat' correctly in Pidgin, for
example, one puts the term designating the eaten thing in first
position and the term designating the eater in second position (just
to be perverse), and so forth. Then you can choose to underline one
term, which is the "subject of the sentence." There are constraints
(surely too strong for the natural language case, but remember, this
is a simple language) dictating that what is being talked about (i.e.,
the "topic" of the discourse) designates the same individual or object
that is designated by the "subject" of the Pidgin sentence. By
underlining different terms, a Pidgin speaker achieves a primitive
equivalent of the active and passive form of sentences in natural
languages. Being the "subject" and designating the "topic" has other
effects in the system. Pronouns in Pidgin are constrained to be used
to designate only the individual that is the current "topic," where
current topic is further defined by the structure of the discourse.
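As a purely hypothetical rendering (the article does not give
Pidgin's actual notation, so every name below is invented), the
mechanics might be sketched in Python as follows:

   from dataclasses import dataclass

   @dataclass
   class Term:
       designates: str
       underlined: bool = False     # underlining marks the "subject"

   def eat(eaten, eater):
       # Argument role is fixed by position: the eaten thing comes
       # first and the eater second, just as perversely as described.
       return {"rel": "eat", "args": [eaten, eater]}

   def topic_ok(sentence, current_topic):
       # The underlined term (the "subject") must designate the same
       # individual as the current "topic" of the discourse.
       subject = next(t for t in sentence["args"] if t.underlined)
       return subject.designates == current_topic

   active_like  = eat(Term("the crepe"), Term("Jon", underlined=True))
   passive_like = eat(Term("the crepe", underlined=True), Term("Jon"))
   print(topic_ok(active_like, "Jon"))          # True
   print(topic_ok(passive_like, "the crepe"))   # True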
Those are all external facts about the language and how it is used.
But, of course, we are interested as well in which of these external
facts are explicitly represented by the internalization mechanism
(i.e., what does our SIE explicitly represent about the grammar of the
language?). Here we are developing a space of theoretical
possibilities, including there being no explicit encoding of the
external facts at all. Still, we might expect that internalizing
something that has the property of being a "subject" would be
different from internalizing, say, something having the property of
being a "nonsubject." For instance, if u is about the eater, then i
may be stored in memory "in a different place" than it would otherwise
be. Being a subject would then have some implicit effect, but there
would be no explicit representation of something's having had the
property of being a subject of the sentence internally. The current
device we have given Pidgin to get at something like contrastive focus
is similarly crude, but again, what is interesting is how it figures
in the system as a whole.
Thus the project for the language designer is a broad one, and
involves more than the standard language design in which one provides
only a syntactic specification and a semantic one. The task here is
more complicated: (i) to spell out constraints between the language and
the discourse domain, and between the language and internalization and
inference, and (ii) to provide a theoretical account. In
experimenting with this more general architecture and in developing
theories for our simple artificial language, we may just learn
something about doing it for the real thing.
% end of part 4
-------
∂17-Oct-86 1453 EMMA@CSLI.STANFORD.EDU CSLI Monthly, 2:1, part 5
Received: from CSLI.STANFORD.EDU by SAIL.STANFORD.EDU with TCP; 17 Oct 86 14:52:50 PDT
Date: Fri 17 Oct 86 13:50:52-PDT
From: Emma Pease <Emma@CSLI.STANFORD.EDU>
Subject: CSLI Monthly, 2:1, part 5
To: friends@CSLI.STANFORD.EDU
Tel: (415) 723-3561
% start of part 5
QUANTIFIERS, REFERRING NOUN PHRASES, AND ELLIPTICAL VERB PHRASES
A Subproject of STASS
Stanley Peters and Mark Gawron
STASS project members Mark Gawron and Stanley Peters are writing a
paper about quantifiers, referring noun phrases including pronouns,
and elliptical verb phrases. They are taking a new look from the
perspective of situation semantics at some old problems that have been
studied under the heading of quantifier scope ambiguities and
ambiguity as to the antecedent of an anaphoric expression (e.g., a
pronoun or an elliptical verb phrase). See, for example, Sag 1977,
Williams 1978, and Bach and Partee 1980. This work also builds in
some distinctions between types of anaphoric function related to those
posited in Evans 1980, Reinhart 1983, and Sells 1985.
The plan is to exploit the relational theory of meaning in explaining
how one utterance of a sentence, e.g., B's utterance in dialogue (1)
(1) A. Are there any canapes left?
B. Yes, nobody ate one mushroom crepe.
can be interpreted as saying that one crepe remains uneaten, while
another utterance of the very same sentence would be interpreted as
saying that all the crepes remain uneaten in dialogue (2).
(2) A. Are all the mushroom crepes still there?
B. Yes, nobody ate one mushroom crepe.
Example (2B) is the type of utterance in which the property people are
denied to have is the (nonparametric) property of eating one mushroom
crepe. Example (1B), in contrast, is the type of utterance in which
the parametric property of eating (the mushroom crepe) c is denied of
people, for one value of the variable c. A central goal of the
analysis is to give an account of how different circumstances can
interact with identical grammatical situations to give different types
of interpretation.
Circumstances also play a crucial role in the account of pronouns and
their uses. The analysis will distinguish among three different fates
that circumstances can dictate for a pronoun: to be used deictically,
coparametrically, or for role-linking -- as exemplified in (3) to (5)
respectively.
(3) A. I hear Mr. Williams had the most popular daughter at the
party.
B. Yeah, John danced with his daughter, and so did about
ten other guys.
(4) A. I hear John had the most popular daughter at the party.
B. Yeah, John danced with his daughter, and so did about
ten other guys.
(5) A. I hear father/daughter dancing was very popular at the
party.
B. Yeah, John danced with his daughter, and so did about
ten other guys.
As these last examples illustrate, there is an interaction between the
contribution the pronoun makes to the interpretation of its clause and
the interpretation that the elliptical verb phrase `so did' gets.
In each case the elliptical verb phrase is interpreted simply as
expressing the same (parametric) property as its antecedent does.
The paper analyzes quantifier ambiguities and anaphoric ambiguities
each on their own terms, and then shows how interactions between the
phenomena, such as those illustrated in (3) to (5), fall out
automatically from the independently motivated analyses. One
particular interaction such an analysis must account for (noted by Sag
and by Williams) is the contrast shown in (6) and (7).
(6) Some teacher read every book.
(7) Some teacher read every book and the principal did too.
Sentence (6) shows an ambiguity similar to that exhibited by (1B) (=
(2B)). Either the (parametric) property ascribed to every book is
that of having been read by some particular teacher t (narrow scope on
`every book'), or the property is that of having been read by some
teacher or other (wide scope on `every book'). In sentence (7),
however, when the elliptical verb phrase `did too' is interpreted
as anaphoric to the verb phrase of the first clause, the wide-scope
reading for `every book' is much less readily available.
A central strategy of the analysis is to account for these various
semantic contrasts by utilizing the circumstances of utterance, and
not by postulating enrichments of syntactic structure. Thus, for
example, in place of coindexing of syntactic structures, Gawron and
Peters assert that circumstances determine coparameterization of their
associated interpretation-types, or similar semantic relationships.
Rules for a fragment of English have been worked out using a
unification-based syntactic and semantic framework -- with
situation-theoretic objects picked out by attribute-value matrices.
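The two readings of (6) can be pinned down extensionally with a small
Python model; this fixes truth conditions only and says nothing about
the situation-theoretic analysis itself.

   teachers = {"t1", "t2"}
   books    = {"b1", "b2"}
   read     = {("t1", "b1"), ("t2", "b2")}    # who read what

   # Narrow scope on 'every book': some one teacher read all the books.
   narrow = any(all((t, b) in read for b in books) for t in teachers)

   # Wide scope on 'every book': each book was read by some teacher or other.
   wide = all(any((t, b) in read for t in teachers) for b in books)

   print(narrow, wide)    # False True: the readings come apart here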
% end of part 5
-------
∂17-Oct-86 1507 EMMA@CSLI.STANFORD.EDU CSLI Monthly, 2:1, part 6
Received: from CSLI.STANFORD.EDU by SAIL.STANFORD.EDU with TCP; 17 Oct 86 15:07:39 PDT
Date: Fri 17 Oct 86 14:09:06-PDT
From: Emma Pease <Emma@CSLI.STANFORD.EDU>
Subject: CSLI Monthly, 2:1, part 6
To: friends@CSLI.STANFORD.EDU
Tel: (415) 723-3561
% start of part 6
STRUCTURE OF WRITTEN LANGUAGES
Geoffrey Nunberg
Project Participants: Mike Dixon, David Levy, Geoffrey Nunberg,
Brian Smith, Tayloe Stansbury
As a part of its research, the Analysis of Graphical Representation
project has spun off a subproject to investigate the structure of
written languages, aimed at providing an explicit framework for
describing the characteristic structural properties of written
languages, both natural and constructed, an area in which research has
heretofore been thin on the ground. Over and above the purely
theoretical interest of these issues (the proof of which pudding will
be found in forthcoming eatings), this research is essential to the
development both of adequate natural-language understanding systems,
and of a new generation of generalized editors, capable of dealing in
a coherent way with documents that incorporate expressions from
several varieties of written language.
Previous research on written natural languages has tended to lump
together several varieties that are probably best kept separate. In
particular, we want to distinguish among "written-down" languages
(i.e., languages for which a writing system is available); "developed
written languages" (languages for which specialized conventions have
evolved for written use, generally in concert with the specialization
of the written language to certain communicative functions); and
"published languages" (i.e., languages that have been used for public
communication of the sort made possible by print and subsequent
technologies). We are interested primarily in written languages of
the third sort, which typically have developed a richer and more
specialized apparatus for marking text structures. For example, the
parenthetical (that is, a string like this one) does not appear in
written languages until well after the introduction of print, a
development that makes sense when we consider its function: it marks
off material that is to be incorporated in an "alternate discourse,"
such as may be required if the reader's knowledge base is not that of
the "ideal reader" that the writer had in mind, or if the
circumstances of interpretation are not entirely foreseeable at the
time of writing. These functions are exemplified in:
1. We include as well the null set (set with no members).
2. Oysters (in season)
The remarks that follow should be taken as applying primarily to the
problem of constructing grammars for published languages.
At least two sorts of grammars are relevant to the description of
published languages. One is the `lexical grammar', which accounts for
the distribution of and dependencies among lexical elements of the
language. Research on written language has tended to assume that the
lexical syntax and semantics of written languages are similar or
identical to the grammars of their spoken equivalents. But this
assumption requires qualification. For one thing, the written language
clearly contains semantic (and arguably, syntactic) categories that
are not relevant to grammatical description of the spoken language.
To take an obvious example, we might consider the written-language
category of "proper expressions," which are marked in texts by
capitalization of word-initial letters. This class includes most of
the class of lexical proper names, such as are defined by various
semantic and syntactic criteria, but it also includes a number of
expressions that would be considered common nouns on purely lexical
grounds (`Frenchman', `Distinguished Flying Cross'), as well as many
adjectives (at least in English), such as `Gallic', `Einsteinian'.
Thus whatever semantic (or ethnosemantic) property is associated with
proper expressions must be explicitly marked in the written-language
lexicon.
The lexical grammar for the written language will also contain a set
of presentation rules (or a graphology), which specify not only the
spellings of lexical items (an area in which there has been a fair
amount of recent research), but also the conditions under which
indicators of morphological structure such as hyphens and apostrophes
are to be inserted. It can be argued that these rules form part of a
coherent system together with the rules for presentation of
text-category indicators (see below).
The `text grammar', by contrast, describes the distribution of a set
of categories and relations defined in terms of the informational
structure of the text, broadly construed--sentences, paragraphs,
direct quotations, "colon expansions," "dash interpolations," and the
like. These are marked in documents by `explicit text-category
indicators', among them certain of the familiar marks of punctuation,
format features like indentation and word- and line-spacing, and such
features as font and face changes and sentence-capitalization. In the
course of our research, it has become clear that the properties of the
text grammar are both rich and nonobvious: not only are they described
inadequately in standard style manuals and textbooks that purport to
describe the conventions of the written language, but their mastery by
competent writers appears to be largely a matter of tacit knowledge,
much like the competent speaker's knowledge of the rules of the spoken
language. What is more, there appears to be little basis for the
widely repeated assumption that the features marked by punctuation and
the like are derivative from features of spoken prosody. There is
no prosodic difference that corresponds to the difference between
semicolon and colon junctures, for example, though the latter are
clearly informationally distinct, as shown by:
3a. He told us the news: we were not permitted to speak with the
director ourselves.
3b. He told us the news; we were not permitted to speak with the
director ourselves.
An adequate semantics for text categories should provide analyses of
the discourse functions associated with each type. Take, for example,
the poorly understood notion of the text-sentence -- roughly speaking,
the sort of string we delimit with an initial capital letter and a
sentence-final delimiter such as a period or question mark, as, for
example, in the second sentence in the following passage:
4. There were a number of factors that contributed to this
development. The Stuarts yielded to the Hanovers; the Whigs
arrived at a new parliamentary strategy: they would oppose Court
policy at every turn. The county families began to send their sons
to university as a matter of course.
We want to know how the discourse interpretation is affected by the
inclusion of all of this information into a single text-sentence (as
well as the informational significance of inclusion of subparts of the
sentence in parentheses, dashes, and so forth). How would the
interpretation of this material differ in context, for example, if the
semicolon were replaced with a period, or the colon with a semicolon?
We also want to know what conditions are imposed on the semantic
well-formedness of text-sentences, and in particular, what relation
there is between the discourse role of the text-sentence and the
informational unity of the lexical sentence, which is traditionally
taken as providing the minimal kernel for text-sentence construction.
Analogously, the text-grammar syntax is responsible for describing the
dependencies that hold among text-categories. Our work in this area
has been based on a realization, new to written-language research,
that such dependencies could be stated in ways that were in large
measure independent of information about the lexical parsing
associated with text-constituents. By way of example, we can consider
the interaction of an "attitudinal category" like the parenthetical
with "structural categories" like the sentence, the paragraph or the
text-clause (roughly, the lexical sentences we conjoin or separate
with semicolons). We note, for example, that a parenthetical cannot
be the initial element of a member of a structural category: a
sentence cannot begin with a parenthesized element, a paragraph cannot
begin with one or more nonterminal parenthesized sentences, and so
forth:
5. *(What is more surprising), they carried no lifeboats.
Note also that a parenthetical initiated internal to a member of a
structural category cannot straddle the boundary of that category:
6. *They finally delivered the air conditioner (in mid-December.
Everyone cheered.)
For the explanation of regularities like these, we will want to look
both at the properties of the parser relevant to written-language
interpretation, and the particular interpretive functions associated
with parentheticals. (Note, by contrast, that quotations are not
subject to the same constraint that operates in (6); thus we can
write, for example: <Reagan announced that aid would be increased to
the Nicaraguan "freedom fighters. They're doing the job there so we
don't have to do it here.">.)
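A deliberately crude checker conveys the flavor of the two
constraints (a Python sketch that takes the period as the only
sentence-final delimiter and ignores quotations, abbreviations, and
fully parenthesized sentences):

   def parens_ok(text):
       depth = 0
       at_sentence_start = True
       for ch in text:
           if ch == "(":
               if at_sentence_start and depth == 0:
                   return False   # sentence begins with a parenthetical
               depth += 1
           elif ch == ")":
               depth -= 1
           elif ch == ".":
               if depth > 0:
                   return False   # parenthetical straddles the boundary
               at_sentence_start = True
               continue
           if not ch.isspace():
               at_sentence_start = False
       return depth == 0

   print(parens_ok("They carried lifeboats (and flares)."))           # True
   print(parens_ok("(What is more surprising), they carried none."))  # False
   print(parens_ok("They delivered it (in mid-December. Hurrah.)"))   # False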
Finally, we are investigating the structure of the presentation rules
associated with text-category indicators. These rules provide us with
two sorts of representations of texts: first, as a linear sequence of
elements, and second, as a two-dimensional arrangement on a page.
Rules of the first type specify the contexts in which certain
indicators can be explicitly rendered. Here again, the rules are less
trivial than handbook descriptions would suggest; for example, we note
that the punctuational indicators associated with structural
categories (that is, the comma, semicolon, colon, and period) must all
cliticize to the lefthand word, and that a word can display only one
such indicator; the choice is determined by reference to a precedence
hierarchy of the form: period > semicolon/colon > comma. Another sort
of rule that has some interest is the type required to handle the
alternation of single and double quotes. Significantly, the form of
such rules presupposes a left-to-right parser, in that the "unmarked"
delimiter (in American usage, double quotes) is used as the outermost
delimiter when quotes are embedded. Note that different conventions
are relevant in mathematical and constructed languages, where the
unmarked delimiter (say, the parenthesis) is used to delimit the
innermost bracketed expression when bracketed elements are nested. In
these cases, the (ideal human) parser is presumed to operate
bottom-up.
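Both sorts of rule are easy to sketch in Python; the precedence
hierarchy and the American quoting convention below are exactly as
stated above, while everything else is invented for illustration.

   PRECEDENCE = {"comma": 1, "semicolon": 2, "colon": 2, "period": 3}

   def displayed_indicator(candidates):
       # A word displays at most one structural indicator: the one
       # highest on the hierarchy period > semicolon/colon > comma.
       return max(candidates, key=PRECEDENCE.get)

   print(displayed_indicator(["comma", "period"]))    # period

   def quote_delimiter(depth):
       # American convention: the unmarked double quote is outermost,
       # alternating with single quotes as quotations are embedded.
       return '"' if depth % 2 == 0 else "'"

   print(quote_delimiter(0) + quote_delimiter(1))     # "'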
The rules of two-dimensional presentation, by contrast, are concerned
with such notions as "line," "page," and "margin." It is these, for
example, that determine in which format contexts such indicators as
word-spacing and paragraph-spacing will be presented. Here again, we
are particularly concerned to arrive at understandings that are
sufficiently general to apply both to natural languages and
constructed languages of various types (mathematics or musical
notation, for example), so as to be able, say, to define a notion of
"widow" (i.e., a single line of text isolated from the rest of its
paragraph by a page boundary, or a single line of pretty-printed code
in a document that is separated by a page break from the rest of the
expression that contains it).
% end of part 6
-------
∂17-Oct-86 1522 EMMA@CSLI.STANFORD.EDU CSLI Monthly, 2:1, part 7
Received: from CSLI.STANFORD.EDU by SAIL.STANFORD.EDU with TCP; 17 Oct 86 15:22:21 PDT
Date: Fri 17 Oct 86 14:11:04-PDT
From: Emma Pease <Emma@CSLI.STANFORD.EDU>
Subject: CSLI Monthly, 2:1, part 7
To: friends@CSLI.STANFORD.EDU
Tel: (415) 723-3561
% start of part 7
SUMMER MEETINGS OF GTDS
Peter Sells
The summer meetings of the GTDS group were primarily reading sessions,
on the topic of cross-linguistic variation in reflexive (and other
related) constructions. Works by von Bremen, Faltz, and Yang
suggested general approaches to a typology of reflexives, while more
particular studies by Sigurdsson, Croft, Shyldkrot and Kemmer, and
Lichtenberk on extended uses of reflexives illustrated the interaction
of reflexive constructions with semantic factors (such as
logophoricity) and with morpho-syntactic factors (such as the use of a
reflexive as a mark of intransitivity). Finally, works by Lebeaux and
Kiparsky suggested certain theoretical approaches to the problem of
cross-linguistic variation.
There already exists at CSLI a considerable amount of material written
on the properties of reflexive constructions in a wide variety of
languages, and the summer meetings were intended as a precursor to a
series of working-group meetings in the academic year, in which
current work in progress will be presented and reviewed, and brought
together as a volume of working papers.
------------------
CSLI SITE DIRECTORS
Although CSLI's headquarters is in Ventura Hall at Stanford
University, its research activities are conducted at the sites of all
three founding institutions: SRI International, Stanford University,
and Xerox PARC.
CSLI has recently appointed site directors to administer CSLI-related
research and staff at the individual sites and to assure coordination
across sites. Tom Wasow, Stanford Professor of Linguistics and
Philosophy, is the director for the Stanford site; David Israel,
researcher in artificial intelligence, is the director for the SRI
site; and Brian Smith, researcher in computer science and philosophy,
is the director for the PARC site.
John Perry, CSLI's previous director, is organizing an Advisory
Committee that will be composed of past directors, site directors, and
senior executives from the three founding institutions. The Advisory
Committee will be concerned with long-range plans for CSLI's structure
and will serve as a resource development group.
------------------
NEW POSTDOCTORAL FELLOWS 1986/87
CSLI is pleased to announce two new postdoctoral fellows, Adrian
Cussins and Craige Roberts.
Cussins received his D.Phil. in 1986 from Oxford University, where he
worked in the areas of philosophy of mind, philosophy of language,
philosophy of psychology, and the theory of content. He hopes to
benefit from CSLI's broad interdisciplinary community and to exploit
his own Oxford philosophical and Edinburgh cognitive science
backgrounds to develop a computational model of nonconceptual
representational processes.
Roberts received her Ph.D. in 1986 from the University of
Massachusetts at Amherst, where she wrote her dissertation on "Modal
Subordination, Anaphora, and Distributivity." She is interested in
the interdisciplinary work on anaphora and discourse being carried out
at CSLI, and hopes to continue her own work on the logical structure
of discourse.
------------------
CSLI VISITING SCHOLARS
Abdelkader Fassi Fehri
Professor of General and Arabic Linguistics
Mohammed V University, Rabat
Dates of visit: September -- October 1986
Fassi Fehri is currently working on the nature of relation-changing
affixes, their homonymy, and the implications of the processes and
correlations induced by affixation for a theory of lexical
organization.
Boaz Lazinger
Director
Division of Computers and Technology
National Council for Research and Development
Ministry of Science, Jerusalem
Dates of visit: July 1986 -- July 1987
Lazinger is studying a systems approach to natural language
understanding, deductive reasoning in document retrieval systems, and
NLP interfaces to existing software.
Peter Ludlow
Dates of visit: September 1986 -- September 1987
Ludlow's work is primarily centered on developing computationally
tractable semantic theories for natural language. In particular, he
is interested in developing a tractable semantics for intensional
contexts and for quantification.
Gordon Plotkin
Department of Computer Science
University of Edinburgh
Dates of visit: August -- October 1986
Plotkin returned to CSLI to continue his work on building models of
situation theory using techniques from domain theory.
Kasper Osterbye
University of Aarhus
Dates of visit: September 1986 -- September 1987
Osterbye's recent work has been on programming languages, especially
dealing with interactive higher-level debugging. At CSLI he is
participating in the Semantics of Programming Languages Project.
Torben Thrane
Center for the Computational Study of the Humanities
University of Copenhagen
Dates of visit: October 1986
Thrane's current work centers on text understanding and anaphoric
resolution, with particular interest in "the universe of discourse."
He is working on a paper in which he investigates the possibilities of
structuring a discourse universe in a way compatible with current
proposals in situation semantics concerning states of affairs,
situations, and situation types.
------------------
ANNOUNCEMENT
AND CALL FOR PAPERS
A meeting on theoretical interactions of linguistics and logic,
sponsored by the Association for Symbolic Logic and the Linguistic
Society of America, will be held at Stanford University on 10 and 11
July 1987. The organizing committee is soliciting abstracts for
presentation at the conference in three categories:
o contributed abstracts of at most 300 words for fifteen-minute
presentations;
o contributed abstracts of at most 1000 words for forty-minute
presentations;
o suggestions for symposia.
Suggestions for symposia are due on 1 February 1987, and all abstracts
are due on 1 March 1987. All communications should state whether the
speakers are members of the LSA or the ASL (or neither), and should be
sent to the following address:
ASL/LSA 87
Richmond H. Thomason
Department of Linguistics
University of Pittsburgh
Pittsburgh, PA 15260
Netmail can be directed to thomason@c.cs.cmu.edu.arpa.
------------------
DEPARTMENT OF INDIRECT SPEECH ACTS
(from the New York Times, 7/24/86)
When Twyla Tharp first applied for a dance grant in the 1960s, she was
neither accustomed to the business of raising money nor enthusiastic
about it. As the story goes, her application said: "I write dance,
not grants. Please send money." Miss Tharp got her grant.
% end of part 7
-------
∂17-Oct-86 1533 EMMA@CSLI.STANFORD.EDU CSLI Monthly, 2:1, part 8 (and last)
Received: from CSLI.STANFORD.EDU by SAIL.STANFORD.EDU with TCP; 17 Oct 86 15:33:16 PDT
Date: Fri 17 Oct 86 14:12:04-PDT
From: Emma Pease <Emma@CSLI.STANFORD.EDU>
Subject: CSLI Monthly, 2:1, part 8 (and last)
To: friends@CSLI.STANFORD.EDU
Tel: (415) 723-3561
% start of part 8
------------------
CSLI PUBLICATIONS
The following reports have recently been published. They may be
obtained by writing to Trudy Vizmanos, CSLI, Ventura Hall, Stanford,
CA 94305 or publications@csli.stanford.edu.
51. Noun-Phrase Interpretation
Mats Rooth
52. Noun Phrases, Generalized Quantifiers and Anaphora
Jon Barwise
53. Circumstantial Attitudes and Benevolent Cognition
John Perry
54. A Study in the Foundations of Programming Methodology:
Specifications, Institutions, Charters and Parchments
Joseph A. Goguen and R. M. Burstall
55. Quantifiers in Formal and Natural Languages
Dag Westerstahl
56. Intentionality, Information, and Matter
Ivan Blair
57. Graphs and Grammars
William Marsh
58. Computer Aids for Comparative Dictionaries
Mark Johnson
59. The Relevance of Computational Linguistics
Lauri Karttunen
60. Grammatical Hierarchy and Linear Precedence
Ivan A. Sag
------------------
APROPOS
Editor's note
Apropos is a new column for pieces about a variety of issues
loosely related to CSLI's areas of research. The opinions expressed
here are not necessarily those of CSLI or the editor. We invite our
readers to submit responses and other pieces by writing to the editor
at CSLI, Ventura Hall, Stanford University, Stanford, CA 94305 or by
sending electronic mail to MONTHLY-EDITOR@csli.Stanford.edu.
The following article, by one of CSLI's researchers, appeared in a
somewhat briefer version in the New York Times, Thursday, 2 October
1986.
AN "OFFICIAL LANGUAGE" FOR CALIFORNIA?
Geoffrey Nunberg
Strange as it may seem, the people of the State of California -- the
creators of Marinspeak and Valley Girl Talk -- will be voting this
fall on a measure intended to protect the English language in the face
of baneful foreign influences. Proposition 63 amends the state
constitution to make English California's "official language," and to
prevent state business from being transacted in any other tongue. The
vote is the most important test to date for ex-Senator S. I.
Hayakawa's "U.S. English" organization, whose ultimate goal is to
attach a similar amendment to the U.S. constitution. The
English-firsters can already claim credit for the passage of official
language measures by the legislatures of two states, but the
California proposal is the first time the issue has been put to a
popular vote or has received wide national attention.
The early surveys have shown a majority of voters favoring
Proposition 63, many of them, apparently, on the assumption that it is
relatively innocuous. But the measure doesn't simply recognize
English as the official state language in the way one might recognize
"California, Here I Come" as the official state song. It specifically
requires the legislature to take all necessary steps to "preserve and
enhance" the role of English as the common state language, and enjoins
it from taking any action that "diminishes or ignores" that role. No
one is quite sure how the courts or legislature will interpret this,
but attorneys on both sides have suggested that it could be used to
end all bilingual education programs, as well as to prohibit the use
of other languages in everything from employment compensation hearings
to government publications and public-service announcements.
The argument most frequently offered for the English language
amendment is that immigrants "will not take the trouble" to learn
English if the government makes services available in other languages.
In a short time, proponents say, we can look forward to having large
permanent non-English-speaking communities in our midst, with the
prospect of separatist movements and ensuing "language wars."
This is not the first time in American history that such spectres have
been raised. Throughout much of the nineteenth century, bilingual
public instruction and administration were common in large parts of
the country. The wave of xenophobic hysteria around the time of the
First World War led to numerous efforts to restrict both immigration
and the use of foreign languages, both perceived as threats to the
Republic. In 1923, for example, the Nebraska supreme court upheld a
law prohibiting foreign language teaching to public-school students,
on the grounds that such instruction would "inculcate in them the
ideas and sentiments foreign to the best interests of this country."
In retrospect, this was all quite silly. The children and
grandchildren of earlier immigrants are proficient in English, and the
pockets of bilingualism that still exist -- among the Pennsylvania
Dutch, the Cajuns, the Finns of Michigan's Upper Peninsula, or Lake
Wobegon's celebrated "Norwegian bachelor farmers" -- are prized both
by locals and state tourist commissions. But of course the
English-firsters are not concerned about the threat of Pennsylvania
Dutch separatism, nor do they appear to have given much thought to the
way their amendment would affect those indigenous populations -- the
Navaho, Eskimos, and Hawaiians, for example -- who are struggling to
keep their languages alive. (Perhaps Hayakawa intends to exempt such
groups by granting them a special "benign minority" status, so as to
allow the use of Navaho, say, by personnel in a reservation school.) I
suspect proponents of the proposition are not even much bothered by
the wave of new Asian immigrants, who are reassuringly polyglot, and
could not coalesce into a monolithic non-English-speaking community.
Their real target is the large Hispanic communities in areas like
California, the Southwest, and south Florida which are threatening not
only because of their size and concentration, but because they are
seen by many as subject to contagion by foreign political interests,
much as were the Germans and Japanese of earlier generations.
But all the evidence shows that these groups are proceeding exactly as
earlier immigrants did. A 1985 Rand Corporation survey reported that
over 95% of first-generation Mexican-Americans born in the U.S. are
proficient in English, and in fact that over 50% of the second
generation speak no Spanish at all. There are important questions, of
course, as to how we can best ease the acculturation of the new
immigrants. Does it make sense to allow immigrant children to take
their math and social studies courses in their native language until
they have learned enough English to enter the regular English-only
course of study? The bulk of current evidence suggests that it does,
though there is disagreement as to which sorts of programs work best.
But these are scarcely constitutional issues, no more than is the
question of whether arithmetic is best taught via the "new math."
What is beyond dispute is that we need have no fear that America will
become a linguistically fragmented state like Canada, where a large
French community has existed since before the English arrived. (Not
that such a situation is necessarily divisive. Would Senator Hayakawa
rather live in multilingual Switzerland, or in largely monolingual
Lebanon? The English-firsters would do well to keep in mind that
"language wars" tend to erupt precisely when one group tries to impose
its language on another. In Northern Ireland, for example, it is
illegal to use Gaelic on street signs and the like, but the statute
scarcely encourages feelings of national unity. In Canada, by
contrast, talk of separatism has almost entirely disappeared since
official bilingualism was established in 1969.)
The English-firsters appear to have lost sight of the enormous
cultural and economic appeal of English, which has made it the most
widely used language in the world, without any official support.
Indeed, the very notion of an English language amendment must seem
bizarre to foreign communities like the French, who are frantically
and fruitlessly writing laws to keep the influence of English at bay;
to them, English needs protecting about as much as crabgrass. To
anyone familiar with the history of the English-speaking world, in
fact, what is most distressing about the prospect of an English
language amendment is that it demeans our own linguistic traditions.
Men like Samuel Johnson and Noah Webster held that the language should
not be subject to state control or interference. The French might
have their academy, but such institutions were unnecessary and
abhorrent in a democratic society, whose citizens would freely agree
on standards of language. This point was not lost on our Founding
Fathers, who debated and rejected proposals to make English an
official language. It is strange that the modern English-firsters,
most of whom would count themselves conservatives, have no faith in
the ability of English to compete unprotected in the linguistic open
market.
Indeed, if the measure is passed, its main effect will be exactly the
opposite of its ostensible goal: it will make it harder for immigrants
who have not yet mastered English to enter the social and economic
mainstream. Take a recent immigrant who finds a job as an agricultural
worker, or cleaning offices at night, and has little direct contact
with English speakers. The amendment won't do anything to help him
learn the language, but it will deny him help in his own language when
he goes to a state employment agency, or tries to find out about
registering his children at a local school. If some advocates have
their way, it will even be impossible for him to get a driver's
license. (Imagine the Europeans insisting that a truck driver
demonstrate proficiency in four languages before being allowed to haul
a load of oranges from Valencia to Copenhagen.)
The English-firsters like to point out that earlier generations of
immigrants were faced with hardships worse than these, and managed to
acculturate themselves nonetheless. But there was nothing ennobling
about the experience, nor did anyone learn English faster as a result.
It is only through a very long and misted glass that someone can look
back with affectionate nostalgia at the reception that our ancestors
underwent at Ellis Island, and conclude that we owe the same treatment
to more recent arrivals.
% end of part 8 and monthly
-------
∂17-Oct-86 1539 EMMA@CSLI.STANFORD.EDU CSLI Monthly, 2:2, part 2
Received: from CSLI.STANFORD.EDU by SAIL.STANFORD.EDU with TCP; 17 Oct 86 15:38:25 PDT
Date: Fri 17 Oct 86 13:47:05-PDT
From: Emma Pease <Emma@CSLI.STANFORD.EDU>
Subject: CSLI Monthly, 2:2, part 2
To: friends@CSLI.STANFORD.EDU
Tel: (415) 723-3561
% start of part 2
------------------
THE WEDGE
Is LOST a Natural Language?
Brian Cantwell Smith
Here's a little argument. I'm not sure I believe it, but no matter.
If I understand things right, Barwise and Perry should be committed to
it. I'd like to find out whether they are, or have them correct me,
if not.
First, two premises:
1. An important function (if not the important function) of
natural language is to convey information.
2. Natural language is situated, which means that the
interpretation of an utterance is typically a function not
only of the sentence's meaning, but of other contextual
factors as well, including, for example, who is speaking, the
time and place of the utterance, etc.
Now my goal is to apply these insights to the language of scientific
theories, in general, and to LOST (the "Language Of Situation
Theory") in particular.
Who knows quite what theories are. In recent times they've been
viewed as linguistic -- as sets of sentences. But it's easy to
imagine arguing for a more abstract conception, which would allow one
and the same theory to be expressed in different languages -- as
English and Russian, for example. Something along the lines of a set
of propositions. But whatever you think about this, theories
certainly have to be expressed in language. Since these expressions
are presumably intended to convey information to humans, they should
presumably be in languages that humans can understand.
For reasons like this, one can argue that the various theoretical
languages that are used by theorists to present proofs, do
mathematics, summarize scientific insight, etc., are better understood
as extensions of natural language than as "formal" or non-situated.
Barwise, in particular, has argued this explicitly, pointing out for
example that `2+2=4', qua sentence, is in the present tense.
To put this same point another way, many people (especially in the
last fifty years) have understood theories by analogy with axioms in
the first-order predicate calculus: as sets of sentences, the
entailments of which are supposed to be true. But, just as Barwise
and Perry have challenged the adequacy of first-order logic as a
mathematical vehicle for explaining the information content of natural
language, so (presumably) they challenge the adequacy of first-order
logic as a metaphorical basis on which to understand the language of
scientific discourse.
Putting all this together, we have a simple conclusion: to the extent
that situation semantics accounts for how natural languages work, that
account should also, by rights, be applicable to languages of
scientific theory, including to the language of situation theory
itself -- i.e., to LOST. Thus there's a certain self-referential
aspect to their enterprise (which, by the way, is fine with me).
So let's look at LOST, for a moment. Many of you will recognize it:
it has lots of double angle brackets. Here's a typical sentence:
s1 |= <<Loves, John, Mary; 1>>
According to the foregoing argument, this language, or future
developments of it, should manifest the crucial properties of natural
language, if it is to serve its function: conveying information to
people about the nature of language, information, and the world. For
example, it should be situated, in a sense that they will presumably
spell out. But if it is to possess all of natural language's
essential properties, doesn't that mean that it should eventually be a
full natural language?
I take it that would be strange. For example, one might well wonder,
if it were true, why we should bother defining LOST in the first
place, rather than starting right off with English, or Warlpiri. But
rather than pursue that line here, I want instead to take the other
tack, and to assume that, no, there *are* ways in which LOST will
differ from other natural languages. The question is what those ways
are.
Here are two possibilities. First, whereas natural languages support
various kinds of speech acts (assertions, commands, queries, promises,
etc.), you might think that a theoretical language would only need to
support assertions. If this were true, LOST might best be
characterized as a restriction of natural language to simple
information conveyance. (Note, by the way, that this is already
beyond the scope of standard first-order logic, in which, I take it,
there is no way to claim anything at all -- utterances have no
assertional force.) On the other hand, as Barwise himself has pointed
out, mathematical proofs, to take just one example, are already more
complex even than that. "Let X be 3," for example, is closer to a
performative than to an assertion. Furthermore, the language some of
us are designing with which to interact with the SIE will have at
least commands and queries, as well as assertions (`SIE' is for
`Situated Inference Engine' -- a computational system being designed
to manifest situated inference). We're doing this in part because of
a belief that it is only with respect to a fuller model of
conversation and action that the true nature of inference will emerge.
And inference, I take it, is an important part of situation theory and
situation semantics. So it's not clear that this first restriction
will hold up.
The second way LOST might differ from natural language is odder. It
has often been pointed out (i.e., I've heard it said; I'm no linguist)
that various lexical classes of English are relatively fixed in
membership, including the prepositions, pronouns, determiners,
conjunctions, etc. In fact, at least to my naive reflection, it seems
that only four classes are open: the nouns, verbs, adjectives, and
adverbs. Viewed in this light, LOST has a very interesting property.
To get at it, note that LOST doesn't exactly have adjectives or
adverbs, but does have a definite class of "predicate" or relation
symbols:
|= -- supports
<< ... ; ...>> -- something like "has as major and minor
constituents, and polarity"
[ ... | ... ] -- the relation between a parameterized soa
and a property
and so on and so forth. I'm not sure how many of these operators
there are at the moment; perhaps a dozen or so. Everything else,
however -- and this is what I find so striking -- occurs in a nominal
(term) position. For example, consider a LOST expression giving the
meaning of the sentence "Bartholomew loves Oobleck":
<<Loves, Bartholomew, Oobleck; 1>>
The English sentence has two nouns (`Bartholomew' and `Oobleck') and
one relation word (`loves'); the LOST expression, in contrast, has
four nominals (`Bartholomew', `Oobleck', `loves', and `1') and one
relational expression (`<< ... >>').
What's my evidence that the first argument to `<< ... >>' is a term
position? Several things. First, I take it you can parameterize out
of that position, as in [ x | <<x, Mary; 1>>] (the type of property
that holds of Mary). Second, the position supports complex
expressions, as in |= << [x|<<Loves,x,John>>], Mary>> (a claim that
the property of loving John holds of Mary). Both of these points
suggest that this position is treated in virtually the same way as any
other, undermining any tendency to analyze it differently. There are
admittedly semantic restrictions on expressions appearing in this
position (they must designate relations) but there are semantic
restrictions on arguments to lots of relations -- first argument to
`loves', for example, must be animate. Furthermore, I don't see that
the constraint holding among the objects that fill the roles of the
`<< ... >>' relation is necessarily directed; it seems instead that
there is merely a mutual constraint: the types of objects designated
by the 2nd through nth arguments must be compatible with the
appropriateness (and arity) conditions of the relation designated by
the 1st. Is there any reason to suppose that the semantic
ill-formedness of <<Died, 2; 1>> lies heavier on the `2' than on the
`Died'?
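To make the term-position claim concrete, here is a toy encoding of
LOST expressions as data structures -- a sketch only, in modern
notation; the names `Const', `Param', `Soa', and `Abstraction' are
invented for the illustration and are not part of LOST. The point is
just that the relation slot holds the same kind of object as every
other slot.

  from dataclasses import dataclass
  from typing import Union

  # Every slot of a soa, including the relation slot, holds a Term,
  # mirroring the claim that the first argument to << ... >> is an
  # ordinary term position.
  Term = Union["Const", "Param", "Abstraction"]

  @dataclass
  class Const:             # a nominal: `Loves', `Mary', `1', ...
      name: str

  @dataclass
  class Param:             # a parameter, as in [ x | <<x, Mary; 1>>]
      name: str

  @dataclass
  class Soa:               # <<rel, arg1, ..., argn; polarity>>
      slots: tuple         # slots[0] is the relation slot -- still a Term
      polarity: int = 1

  @dataclass
  class Abstraction:       # [ param | soa ]
      param: Param
      body: Soa

  # <<Loves, Bartholomew, Oobleck; 1>>: four nominals, one relational frame
  loves = Soa((Const("Loves"), Const("Bartholomew"), Const("Oobleck")), 1)

  # [ x | <<x, Mary; 1>>]: parameterizing out of the *relation* position,
  # exactly as one would parameterize out of any argument position
  x = Param("x")
  prop_of_mary = Abstraction(x, Soa((x, Const("Mary")), 1))

Nothing in the datatype distinguishes the relation slot from the
others; any special treatment would have to be added as a semantic
restriction, which is just the situation described above.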
(There's another line of argument against my position, having instead
to do with LOST sentences of the form `R(a)', rather than with
`|=<<R,a;1>>'. The former, one might argue, doesn't nominalize the
relation R in the way that the latter does. This seems to me right;
it's just that I haven't seen the `R(a)' form used much. If it is
used, then my sense of a difference is false, and we're back to the
claim that LOST doesn't differ in any salient way from any other
natural language.)
Here's my suggestion. Yes, there is a certain sense in which LOST is
supposed to be a natural language. First, it is designed to be used
as an extension of natural language, so that sentences like the
following make sense: "The meaning of `Bartholomew loves Oobleck' is
<<Loves, Bartholomew, Oobleck; 1>>." Second, when it is extended to
provide a rigorous account of the use and scope of parameters, etc.,
and especially when it is extended to deal with inference, LOST may
well involve "speech acts" above and beyond simple declarative
assertions. Third, statements in LOST, like statements in any natural
language, will inevitably be situated. And there are probably other
similarities as well.
But there is a difference. In spite of the foregoing points of
similarity, LOST statements that express the semantical content of
natural language sentences, in so far as possible, will objectify --
i.e., will use a nominal to refer to -- those aspects of the content
of the original utterance that (a) would in the original utterance
have been contributed by the circumstances of use, and (b) would have
been signified in the original utterance by nonnominal constructions
like verbs, predicates, relative pronouns, etc. Because of this heavy
demand on objectification (is objectification what semantics really
is?), LOST should be expected to have lots of nouns, and lots of
nominalization operators.
English has lots of nouns, too, and lots of nominalization operators.
What makes LOST really unique is that every other lexical class will
be fixed, finite, and small.
% end of part 2
-------
∂23-Oct-86 0936 EMMA@CSLI.STANFORD.EDU CSLI Calendar
Received: from CSLI.STANFORD.EDU by SAIL.STANFORD.EDU with TCP; 23 Oct 86 09:36:06 PDT
Date: Thu 23 Oct 86 08:29:43-PDT
From: Emma Pease <Emma@CSLI.STANFORD.EDU>
Subject: CSLI Calendar
To: friends@CSLI.STANFORD.EDU
Tel: (415) 723-3561
CSLI (the computer) was down most of yesterday; hence, the CSLI
Calendar will not be out till later today.
-Emma Pease
-------
∂23-Oct-86 1147 EMMA@CSLI.STANFORD.EDU CSLI Calendar, October 23, No. 4
Received: from CSLI.STANFORD.EDU by SAIL.STANFORD.EDU with TCP; 23 Oct 86 11:47:00 PDT
Date: Thu 23 Oct 86 10:30:02-PDT
From: Emma Pease <Emma@CSLI.STANFORD.EDU>
Subject: CSLI Calendar, October 23, No. 4
To: friends@CSLI.STANFORD.EDU
Tel: (415) 723-3561
C S L I C A L E N D A R O F P U B L I C E V E N T S
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
October 23, 1986 Stanford Vol. 2, No. 4
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR THIS THURSDAY, October 23, 1986
12 noon TINLunch
Ventura Hall Reading: "Circumstantial Attitudes and Benevolent
Conference Room Cognition" by John Perry
Discussion led by David Israel
(Israel@csli.stanford.edu)
Abstract in this week's calendar
2:15 p.m. CSLI Seminar
Redwood Hall HPSG Theory and HPSG Research
Room G-19 Ivan Sag (Sag@csli.stanford.edu)
Abstract in last week's calendar
3:30 p.m. Tea
Ventura Hall
←←←←←←←←←←←←
CSLI ACTIVITIES FOR NEXT THURSDAY, October 30, 1986
12 noon TINLunch
Ventura Hall No TINLunch this week
Conference Room
2:15 p.m. CSLI Seminar
Redwood Hall Distributivity
Room G-19 Craige Roberts (Croberts@csli.stanford.edu)
Abstract in this week's calendar
3:30 p.m. Tea
Ventura Hall
--------------
THIS WEEK'S TINLUNCH
Reading: "Circumstantial Attitudes and Benevolent Cognition"
by John Perry
Discussion led by David Israel
October 23, 1986
I will discuss the two main points of Perry's paper, (a) efficiency
and (b) incrementality (the unburdening of belief), from a resolutely
design-oriented perspective.
--------------
NEXT WEEK'S SEMINAR
Distributivity
Craige Roberts
October 30, 1986
I will offer a theory of the phenomenon of distributivity, illustrated
by examples such as "Four men lifted a piano". On one reading, the
group reading, the men denoted by the subject lifted a piano together.
On the distributed reading, each of the men has the property denoted
by the predicate. I will propose that distributivity is a property of
predications, combinations of a subject and a predicate. The
predicate need not be the syntactic VP, but may be derived via lambda
abstraction or some comparable mechanism. Distributivity may be
triggered either by a quantificational determiner in the subject NP or
by the presence of an explicit or implicit adverbial distributivity
operator on the predicate. A group reading arises when neither the
subject nor an adverbial element of the predicate contributes the
quantificational force underlying distributivity. It will be shown
that this theory, in conjunction with a theory of the semantics of
plurality along lines suggested by Godehard Link, predicts correct
interpretations for a range of examples, and also permits an account
of anaphoric phenomena associated with distributivity. In addition,
it provides the basis of a simple theory of plural anaphora.
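As an informal gloss on the group/distributive contrast, the two
readings can be modeled against a toy event record; this is only a
sketch, and the encoding of lifting events as (set of lifters, piano)
pairs is invented for the illustration, not part of the analysis.

  # Toy model of "Four men lifted a piano". A lifting event is a pair
  # (frozenset of lifters, piano); all the data is invented.
  men = {"m1", "m2", "m3", "m4"}

  def group_reading(subject, events):
      # The group denoted by the subject lifted a piano together.
      return any(lifters == frozenset(subject) for lifters, _ in events)

  def distributive_reading(subject, events):
      # Distributivity operator: each member individually has the
      # property "lifted a piano".
      return all(
          any(lifters == frozenset({m}) for lifters, _ in events)
          for m in subject
      )

  joint = [(frozenset(men), "piano-1")]
  separate = [(frozenset({m}), "piano-" + m) for m in sorted(men)]

  print(group_reading(men, joint), distributive_reading(men, joint))
  # True False: one joint lift
  print(group_reading(men, separate), distributive_reading(men, separate))
  # False True: four separate lifts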
--------------
MORPHOLOGY/SYNTAX/DISCOURSE INTERACTIONS GROUP
The first meeting of the Morphology/Syntax/Discourse Interactions
group this Fall will be on Tuesday October 28, at 12:30 (abstract
and title below). Subsequent meetings will be on Mondays, at 12:30,
on the general topic of anaphora and in several instances on the
particular topic of reflexives. --Peter Sells
Relation-changing Affixes and Homonymy
Abdelkader Fassi-Fehri
October 28, 12:30, Trailer Classroom, CSLI
Of special relevance to a natural theory of affixation are the
following questions:
a) What is the exact nature of the changes that a lexical unit
undergoes as the result of an affixation process (role or argument
reduction or increase, valency reorganization, etc.), and which
level of representation is the most appropriate to state these
changes?
b) Given that languages use different systems of homonymic forms of
affixes to express different valencies (or the same valency
organized in different ways), is there a possible account which
will predict which homonymy affixation would be natural, and which
one would be accidental?
We propose a theory of lexical organisation that answers these
questions.
--------------
PIXELS AND PREDICATES
Abstract Film -- A Dynamic Graphic Art Form
Larry Cuba
1:15pm, Tuesday October 28, 1986, CSLI trailers
Paralleling the development of the theatrical film industry, there is
a history of individual artists creating an alternative film art
guided by the esthetics of music and painting rather than drama.
Theatrical-style films are narrative, telling a story. Abstract films
are non-narrative, and use non-representational imagery.
Film artist Larry Cuba will discuss the dynamic graphic art form of
abstract film and present a number of his computer animated films. A
selection of abstract films by other artists produced with
non-computer techniques will also be screened.
8 abstract films will be shown, time permitting:
Larry Cuba: "3/78" 1978,
"Two Space" 1979,
"Caculated Movements" 1985
Oscar Fischinger: "Composition In Blue" 1935,
"Allegretto" 1936
Norman McLaren: "Synchromy" 1972
Paul Glabicki: "Five Improvisations" 1980
Bill Yarrington: "Chants/chance" 1983
--------------
SYNTAX OF SOUTH ASIAN LANGUAGES WORKSHOP
A workshop on the syntax of South Asian languages, organized by Paul
Kiparsky and Mary Dalrymple, will be held at CSLI on October 25 and
26. Non-Stanford participants will include Kashi Wali (Syracuse, New
York), P. J. Mistry (California State University, Fresno), and Alice
Davison (University of Illinois), as well as K. P. Mohanan, visiting
professor at Stanford University. The schedule of presentations is
posted in the Linguistics Department. Contact Mary Dalrymple or the
Stanford Linguistics Department (dalrymple@csli.stanford.edu) for more
information.
-------
∂28-Oct-86 1244 EMMA@CSLI.STANFORD.EDU Psychology Colloquium
Received: from CSLI.STANFORD.EDU by SAIL.STANFORD.EDU with TCP; 28 Oct 86 12:44:09 PST
Date: Tue 28 Oct 86 11:37:38-PST
From: Emma Pease <Emma@CSLI.STANFORD.EDU>
Subject: Psychology Colloquium
To: friends@CSLI.STANFORD.EDU
Tel: (415) 723-3561
Psychology Colloquium
Jacques Mehler
Centre National de la Recherche Scientifique, Paris
"Language processing in French and English."
Wednesday, 3:45 p.m.
Jordan Hall, Room 50 (in the basement)
-------
∂30-Oct-86 1456 EMMA@CSLI.STANFORD.EDU CSLI Calendar, October 30, No. 5
Received: from CSLI.STANFORD.EDU by SAIL.STANFORD.EDU with TCP; 30 Oct 86 14:54:52 PST
Date: Thu 30 Oct 86 13:49:14-PST
From: Emma Pease <Emma@CSLI.STANFORD.EDU>
Subject: CSLI Calendar, October 30, No. 5
To: friends@CSLI.STANFORD.EDU
Tel: (415) 723-3561
(Sorry for the delay; Turing was down for 18 hours)
C S L I C A L E N D A R O F P U B L I C E V E N T S
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
October 30, 1986 Stanford Vol. 2, No. 5
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR THIS THURSDAY, October 30, 1986
12 noon TINLunch
Ventura Hall No TINLunch this week
Conference Room
2:15 p.m. CSLI Seminar
Redwood Hall Distributivity
Room G-19 Craige Roberts (Croberts@csli.stanford.edu)
Abstract in last week's calendar
3:30 p.m. Tea
Ventura Hall
←←←←←←←←←←←←
CSLI ACTIVITIES FOR NEXT THURSDAY, November 6, 1986
12 noon TINLunch
Ventura Hall Reading: "Concepts of Language" by Noam Chomsky
Conference Room Discussion led by Thomas Wasow
(Wasow@csli.stanford.edu)
Abstract in this week's Calendar
2:15 p.m. CSLI Seminar
Redwood Hall The Construction of Thought
Room G-19 Adrian Cussins (Adrian@csli.stanford.edu)
Abstract in this week's Calendar
3:30 p.m. Tea
Ventura Hall
--------------
NEXT WEEK'S TINLUNCH
Reading: "Concepts of Language" by Noam Chomsky
Chap. 2 of "Knowledge of Language: Its Nature, Origin, and Use"
discussion led by Thomas Wasow
November 6, 1986
Chomsky argues against concepts of language that treat it as something
external to the speaker; language, so conceived, is alleged to be an
"epiphenomenon." Instead, Chomsky says that the object of study in
linguistics should be the internalized knowledge of the speaker--that
is, what he has previously called grammar and now refers to as
"I-language." This, he claims, is more concrete, since it has a
physical reality in the "mind/brain." His position seems to be at
odds with the claim (frequently made around here) that language is
"situated" and should not be studied apart from its context of use.
Are these views really incompatible, and, if so, who is wrong?
--------------
NEXT WEEK'S SEMINAR
The Construction of Thought
Adrian Cussins
November 6, 1986
How could the physical world make available the transition from a
way of being which does not admit experience or thought to a way of
being which does? How could it be that `in' the world there are
things which think `about' the world?
I shall outline my conception of what it would be to provide a
psychological theory that answers these questions and I shall consider
the theory's relation to philosophical, linguistic, neurophysiological
and computational accounts.
I shall leave a couple of copies of my thesis with the receptionist
should anyone want further details, but no reading will be
presupposed.
-------
∂05-Nov-86 1835 EMMA@CSLI.STANFORD.EDU CSLI Calendar, November 6, No. 6
Received: from CSLI.STANFORD.EDU by SAIL.STANFORD.EDU with TCP; 5 Nov 86 18:35:32 PST
Date: Wed 5 Nov 86 17:13:08-PST
From: Emma Pease <Emma@CSLI.STANFORD.EDU>
Subject: CSLI Calendar, November 6, No. 6
To: friends@CSLI.STANFORD.EDU
Tel: (415) 723-3561
C S L I C A L E N D A R O F P U B L I C E V E N T S
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
November 6, 1986 Stanford Vol. 2, No. 6
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR THIS THURSDAY, November 6, 1986
12 noon TINLunch
Ventura Hall Reading: "Concepts of Language" by Noam Chomsky
Conference Room Discussion led by Thomas Wasow
(Wasow@csli.stanford.edu)
Abstract in last week's Calendar
2:15 p.m. CSLI Seminar
Redwood Hall The Construction of Thought
Room G-19 Adrian Cussins (Adrian@csli.stanford.edu)
Abstract in last week's Calendar
3:30 p.m. Tea
Ventura Hall
←←←←←←←←←←←←
CSLI ACTIVITIES FOR NEXT THURSDAY, November 13, 1986
12 noon TINLunch
Ventura Hall Reading: "Information and Circumstance"
Conference Room by Jon Barwise
Discussion led by Curtis Abbott
(Abbott.pa@xerox.com)
Abstract in this week's Calendar
2:15 p.m. CSLI Seminar
Redwood Hall "Quantified and Referring Noun Phrases, Pronouns,
Room G-19 and Anaphora"
Stanley Peters and Mark Gawron
(Peters@csli.stanford.edu, Gawron@csli.stanford.edu)
Abstract in this week's Calendar
3:30 p.m. Tea
Ventura Hall
--------------
NEXT WEEK'S TINLUNCH
Reading: "Information and Circumstance"
by Jon Barwise
Discussion led by Curtis Abbott
November 13, 1986
This paper is partly a reply to a paper of Fodor's and partly an
exploration of situated inference. The first aspect is relevant to
the embedding circumstance of this TINLunch, since Barwise will be
leading a discussion of Fodor's reply to the reply next week, but I
hope to focus our discussion this week on situated inference.
Situated inference occurs among speakers of situated languages,
languages in which the content of an utterance, and therefore the
validity of inferences, may depend on embedding circumstances.
Barwise locates some of the mismatches between formal and everyday
reasoning in the ability to exploit shifting circumstances that is
available in situated inference. He describes cross-cutting
distinctions between what is articulated in an utterance and what is a
constituent of its content and, building on this, suggests several
mechanisms for situated inference. Barwise clearly views situated
language and inference as generalizations of their formal
counterparts. Questions we might want to explore include whether a
more elaborate taxonomy of mechanisms for situated inference is a
priority, and how we ought to understand the proper role of formal
language and inference in this generalized setting.
--------------
NEXT WEEK'S SEMINAR
"Quantified and Referring Noun Phrases, Pronouns, and Anaphora"
Mark Gawron and Stanley Peters
November 13, 1986
A variety of interactions have been noted between scope ambiguities of
quantified noun phrases, the possibility of interpreting pronouns as
anaphoric, and the interpretation of elliptical verb phrases.
Consider, for example, the following contrast, first noted in Ivan
Sag's 1976 dissertation.
(1) John read every book before Mary did.
(2) John read every book before Mary read it.
The second sentence is interpretable either to mean that each book
was read by John before Mary read it, or that every book was read by
John before Mary read any. The first sentence has only the former
interpretation.
The seminar will describe developments in situation theory
pertinent to the semantics of various quantifier phrases in English,
as well as of `referring' noun phrases including pronouns, and of
anaphoric uses of pronouns and elliptical verb phrases. We aim to
show how the theory of situations and situation semantics sheds light
on a variety of complex interactions such as those illustrated above.
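As a concrete gloss on the contrast, the two readings of (2) can be
checked against a toy model; the sketch below is illustrative only,
and the reading times are invented.

  # Two readings of "John read every book before Mary read it".
  # john and mary map each book to the (invented) time it was read.
  books = ["b1", "b2"]
  john = {"b1": 1, "b2": 3}
  mary = {"b1": 2, "b2": 4}

  def each_before(j, m, bs):
      # Reading 1: each book was read by John before Mary read it.
      return all(j[b] < m[b] for b in bs)

  def all_before_any(j, m, bs):
      # Reading 2: every book was read by John before Mary read any.
      return max(j[b] for b in bs) < min(m[b] for b in bs)

  print(each_before(john, mary, books))      # True
  print(all_before_any(john, mary, books))   # False: John finished b2
                                             # only after Mary began b1

On the judgment reported above, sentence (1) admits only the reading
computed by each_before.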
--------------
MORPHOLOGY/SYNTAX/DISCOURSE INTERACTIONS
Long-Distance Reflexivization and Focus in Marathi
Mary Dalrymple
12:30, Monday, November 10
Ventura Trailers
Marathi, an Indo-Aryan language, has two reflexives: long-distance
`aapaN' and short-distance `swataah'. The long-distance reflexive may
appear in subordinate clauses when its antecedent is the subject of a
higher clause; it may appear only in certain positions in simple
clauses. The short-distance reflexive may appear in simple clauses
and in subject position in tensed subordinate clauses.
I will discuss the basic properties of the two reflexives and give
an LFG-style feature analysis that accounts for their distribution. I
will also discuss some examples which show that the distribution of
the long-distance reflexive changes when focusing is involved.
-------
∂12-Nov-86 1647 EMMA@CSLI.STANFORD.EDU CSLI Calendar, November 13, No. 7
Received: from CSLI.STANFORD.EDU by SAIL.STANFORD.EDU with TCP; 12 Nov 86 16:47:03 PST
Date: Wed 12 Nov 86 16:15:54-PST
From: Emma Pease <Emma@CSLI.STANFORD.EDU>
Subject: CSLI Calendar, November 13, No. 7
To: friends@CSLI.STANFORD.EDU
Tel: (415) 723-3561
C S L I C A L E N D A R O F P U B L I C E V E N T S
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
November 13, 1986 Stanford Vol. 2, No. 7
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR THIS THURSDAY, November 13, 1986
12 noon TINLunch
Ventura Hall Reading: "Information and Circumstance"
Conference Room by Jon Barwise
Discussion led by Curtis Abbott
(Abbott.pa@xerox.com)
Abstract in last week's Calendar
2:15 p.m. CSLI Seminar
Redwood Hall "Quantified and Referring Noun Phrases, Pronouns,
Room G-19 and Anaphora, Part I"
Stanley Peters and Mark Gawron
(Peters@csli.stanford.edu, Gawron@csli.stanford.edu)
Abstract in last week's Calendar
3:30 p.m. Tea
Ventura Hall
←←←←←←←←←←←←
CSLI ACTIVITIES FOR NEXT THURSDAY, November 20, 1986
12 noon TINLunch
Ventura Hall Reading: "The Situated Grandmother"
Conference Room by Jerry Fodor
Discussion led by Jon Barwise
(Barwise@csli.stanford.edu)
Abstract in this week's Calendar
2:15 p.m. CSLI Seminar
Redwood Hall "Quantified and Referring Noun Phrases, Pronouns,
Room G-19 and Anaphora, Part II"
Stanley Peters and Mark Gawron
(Peters@csli.stanford.edu, Gawron@csli.stanford.edu)
Abstract in this week's Calendar
3:30 p.m. Tea
Ventura Hall
--------------
ANNOUNCEMENT
There will be no Calendar and no activities on Thursday, November 27
because of Thanksgiving.
--------------
NEXT WEEK'S TINLUNCH
Reading: "The Situated Grandmother"
by Jerry Fodor
Discussion led by Jon Barwise
November 20, 1986
This is a reply by Fodor to my paper "Information and Circumstance"
which was discussed at last week's TINLunch. In my paper (itself a
reply to his commentary "Information and Association") I argued that
natural inference was situated, not formal. In this paper, Fodor
argues that natural inference, though situated, is nevertheless also
formal. In making this argument, Fodor introduces a new "explicitness
condition" on what it means for something to be explicitly, as opposed
to implicitly, represented.
--------------
NEXT WEEK'S SEMINAR
Quantified and Referring Noun Phrases, Pronouns, and Anaphora
Mark Gawron and Stanley Peters
November 13 and 20, 1986
A variety of interactions have been noted between scope ambiguities
of quantified noun phrases, the possibility of interpreting pronouns
as anaphoric, and the interpretation of elliptical verb phrases.
Consider, for example, the following contrast, first noted in Ivan
Sag's 1976 dissertation.
(1) John read every book before Mary did.
(2) John read every book before Mary read it.
The second sentence is interpretable either to mean that each book
was read by John before Mary read it, or that every book was read by
John before Mary read any. The first sentence has only the former
interpretation.
The seminar will describe developments in situation theory
pertinent to the semantics of various quantifier phrases in English,
as well as of `referring' noun phrases including pronouns, and of
anaphoric uses of pronouns and elliptical verb phrases. We aim to
show how the theory of situations and situation semantics sheds light
on a variety of complex interactions such as those illustrated above.
(This seminar is a continuation of the seminar held on November 13.)
-------
∂19-Nov-86 1750 EMMA@CSLI.STANFORD.EDU CSLI Calendar, November 20, No. 8
Received: from CSLI.STANFORD.EDU by SAIL.STANFORD.EDU with TCP; 19 Nov 86 17:50:18 PST
Date: Wed 19 Nov 86 17:04:12-PST
From: Emma Pease <Emma@CSLI.STANFORD.EDU>
Subject: CSLI Calendar, November 20, No. 8
To: friends@CSLI.STANFORD.EDU
Tel: (415) 723-3561
C S L I C A L E N D A R O F P U B L I C E V E N T S
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
November 20, 1986 Stanford Vol. 2, No. 8
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
←←←←←←←←←←←←
CSLI ACTIVITIES FOR THIS THURSDAY, November 20, 1986
12 noon TINLunch
Ventura Hall Reading: "The Situated Grandmother"
Conference Room by Jerry Fodor
Discussion led by Jon Barwise
(Barwise@csli.stanford.edu)
Abstract in last week's Calendar
2:15 p.m. CSLI Seminar
Redwood Hall "Quantified and Referring Noun Phrases, Pronouns,
Room G-19 and Anaphora, Part II"
Stanley Peters and Mark Gawron
(Peters@csli.stanford.edu, Gawron@csli.stanford.edu)
Abstract in last week's Calendar
3:30 p.m. Tea
Ventura Hall
←←←←←←←←←←←←
CSLI ACTIVITIES FOR THURSDAY, DECEMBER 4, 1986
12 noon TINLunch
Ventura Hall Reading: to be announced
Conference Room Discussion led by Annie Zaenen
(Zaenen.pa@xerox.com)
Abstract in the next Calendar
2:15 p.m. CSLI Seminar
Redwood Hall Rational Agency
Room G-19 David Israel
(Israel@csli.stanford.edu)
Abstract in the next Calendar
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Talk
Redwood Hall Rationality and Politeness
Room G-19 Prof. Asa Kasher
University of Tel Aviv, Dept. of Philosophy
Abstract in the next Calendar
--------------
ANNOUNCEMENT
There will be no Calendar and no activities on Thursday, November 27
because of Thanksgiving.
--------------
MORPHOLOGY/SYNTAX/DISCOURSE INTERACTIONS GROUP
Binding in Russian
Masayo Iida
12:30, Monday November 24
Ventura Conference Room
The reciprocal `drug druga' and the reflexive `sebja' in Russian are
anaphors, in the sense that they must have a syntactic antecedent. In
GB an anaphor is represented as [+a, -p], which predicts that the two
should show the same binding properties. However, Russian reciprocal
and reflexive pronouns behave differently.
I will discuss binding in Russian in the LFG framework. The
binding theory of LFG is characterized as a feature specification,
represented by three basic features, [subject], [nuclear] and
[logophoric]. Unlike the GB system, which partitions the class of
anaphors into fixed types, LFG permits anaphors to be
specified with different binding features from one another. Moreover,
the theory employs independent features to encode antecedent selection
and binding domain, which may be used to account for different binding
properties between the reciprocal and the reflexive.
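As a purely illustrative sketch of binding-as-feature-specification,
one can model anaphors as bundles of features and check candidate
antecedents against them. Only two of the three features appear
below, and the particular values and conditions are hypothetical
placeholders, not Iida's analysis.

  # Hypothetical feature bundles; all values are placeholders only.
  ANAPHORS = {
      "sebja":      {"subject": True,  "nuclear": False},
      "drug druga": {"subject": False, "nuclear": True},
  }

  def licensed(anaphor, antecedent_is_subject, in_nuclear_domain):
      # A [+subject] anaphor demands a subject antecedent; a [+nuclear]
      # anaphor must be bound within the minimal (nuclear) domain.
      # Keeping antecedent selection and binding domain as separate
      # features is what lets the two anaphors diverge.
      spec = ANAPHORS[anaphor]
      if spec["subject"] and not antecedent_is_subject:
          return False
      if spec["nuclear"] and not in_nuclear_domain:
          return False
      return True

  # With these placeholder values, the two anaphors already differ:
  print(licensed("sebja", antecedent_is_subject=True,
                 in_nuclear_domain=False))        # True
  print(licensed("drug druga", antecedent_is_subject=True,
                 in_nuclear_domain=False))        # False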
-------
∂24-Nov-86 1223 EMMA@CSLI.STANFORD.EDU CSLI Monthly
Received: from CSLI.STANFORD.EDU by SAIL.STANFORD.EDU with TCP; 24 Nov 86 12:23:14 PST
Date: Mon 24 Nov 86 11:38:43-PST
From: Emma Pease <Emma@CSLI.STANFORD.EDU>
Subject: CSLI Monthly
To: friends@CSLI.STANFORD.EDU
Tel: (415) 723-3561
The CSLI Monthly will be coming either tomorrow or Wednesday. To
reduce the load on the system and on people's mail files, I will not
be sending the monthly to people with stanford or sri accounts on the
assumption that (a) they can pick up the hardcopy or (b) they can ftp the
file easily. There will be a few exceptions (i.e., people with
forsythe accounts will receive the monthly, people who receive only
the monthly will receive the monthly).
-Emma Pease
ps. The Monthly will be stored online in <csli>csli-monthly.11-86
pps. This applies only to the monthly; you will continue to receive
the weekly Calendar.
-------
∂25-Nov-86 1732 EMMA@CSLI.STANFORD.EDU CSLI Monthly, 2:2 part 1
Received: from CSLI.STANFORD.EDU by SAIL.STANFORD.EDU with TCP; 25 Nov 86 17:32:00 PST
Date: Tue 25 Nov 86 16:12:08-PST
From: Emma Pease <Emma@CSLI.STANFORD.EDU>
Subject: CSLI Monthly, 2:2 part 1
To: friends@CSLI.STANFORD.EDU
Tel: (415) 723-3561
CSLI MONTHLY
-----------------------------------------------------------------------
November 1986 Vol. 2, No. 2
-----------------------------------------------------------------------
A monthly publication of
The Center for the Study of Language and Information
------------------
Contents
Communication as Rational Interaction
by Philip Cohen
The Wedge
Syntax or Semantics?
Distributivity
Craige Roberts
Representation: A Personal View
Adrian Cussins
Symbolic Systems Program
Helen Nissenbaum
Postdoctoral Fellowship
CSLI Publications
------------------
COMMUNICATION AS RATIONAL INTERACTION
Phil Cohen
I was asked to describe my research program with Hector Levesque
(University of Toronto), and to a certain extent with Ray Perrault
(who does not agree with everything that follows), in a short space
for wide circulation. I agreed, realizing only later how hard a job
it was. A worthwhile exercise, to be sure, but one that requires
ruthless editing. For example, the first thing that has to go is the
usual hedges. So, since I will be overstating my case a bit, I hereby
hedge a bit for the remainder of this article. A more complete
exposition can be found in (Cohen and Levesque 1986, 1986a).
We take the view that language use can be productively regarded from
the perspective of action. For us, this provides not just a slogan,
but a program of research, namely, to identify those aspects of
language use that follow from general principles of rational,
cooperative interaction. Our pursuing such a research program does
not mean that we believe all language use is completely and
consciously thought out and planned. Far from it. Rather, just as
there are grammatical, processing, and sociocultural constraints on
language use, so may there be constraints imposed by the rational
balance agents maintain among their beliefs, intentions, commitments,
and actions. Our goals are to discover such constraints, to develop a
logical theory that incorporates them and predicts dialogue phenomena,
and finally to apply them in developing algorithms for human-computer
interaction in natural language.
To pursue this research program, we treat utterance events as special
cases of other events that change the state of the world; utterance
events change the mental states of speakers and hearers. Typically,
utterance events are performed by a speaker in order to effect such
changes. Moreover, they do so because they signal, or carry, (at
least) the information that the speaker is in a certain mental state,
such as intending the hearer to adopt a mental state. Conversations
arise and proceed because of an interplay among agents' mental states,
their capabilities for purposeful behavior, their cooperativeness, the
content and circumstances of their utterances, and surely other
factors to be elucidated. A theory of conversation based on this
approach would explain dialogue coherence in terms of the mental
states of the participants, how those mental states lead to
communicative action, how those acts affect the mental states of the
hearers, etc.
A natural avenue to travel in pursuit of some of these goals would
appear to be speech act theory. After all, here is where theorists
have promoted, and examined in some depth, many of the implications of
treating language as action. Speech act theory was originally
conceived as part of action theory. Many of Austin's insights about
the nature of speech acts, felicity conditions, and modes of failure
were derived from a study of noncommunicative actions. Searle (1969)
repeatedly mentions that many of the conditions he attributes to
various illocutionary acts (such as requests and questions) apply more
generally to noncommunicative action. However, in recent work Searle
and Vanderveken (1985) (hereafter, S&V) formalize communicative acts
and propose a logic in which their properties (e.g., "preparatory
conditions" and "modes of achievement") are primitively stipulated,
rather than derived from more basic principles of action (as S&V in
fact recommend). We believe such an approach misses significant
generalities. Our research shows how to derive properties of
illocutionary acts from principles of rationality, and hence suggests
that the theory of illocutionary acts is descriptive but not
explanatory.
Consider the following seemingly trivial dialogue fragment:
A: "Open the door."
B: "Sure."
Linguistically, these utterances are uninteresting. Of course, the
semantics and effects of imperatives are nontrivial (and I'll get to
that), and the meaning of "Sure" is unclear. But, it seems to me that
the speakers' intentions and the situation of their utterances play
the crucial role in determining what has happened during the dialogue,
and how what has changed can influence agents' further actions. It
would be reasonable to `describe' what has happened by saying that A
has performed a directive speech act (e.g., a request), and that B has
performed a commissive (e.g., a promise). To see that B did, imagine
B's saying "Sure" and then doing nothing. A would surely be justified
in complaining, or asking for an explanation. A competence theory of
communication needs to explain how an interpersonal commitment becomes
established. Ours does so by explaining what effects are brought
about by a speaker's uttering an imperative in a given situation, and
how the uttering of "Sure" relates to those effects. These
explanations will make crucial reference to intention, but not
necessarily to illocutionary acts.
It is tempting to read, or perhaps misread, philosophers of language
as saying that illocutionary force recognition is required for
successful communication. Austin (1962) and Strawson (1964) require
"uptake" to take place. Searle and Vanderveken (Searle 1969, Searle
and Vanderveken 1985) claim that illocutionary force is part of the
meaning of an utterance, and the intended effect of an utterance is
"understanding." Hence, because hearers are intended to understand
the utterance, presumably including at least an understanding of its
meaning, on one reading of their claim, the hearer is intended to
recognize the utterance's illocutionary force. (NOTE: But, perhaps
they mean illocutionary force `potential'. They write: "Part of the
meaning of an elementary sentence is that its literal utterance in a
given context constitutes the performance or attempted performance of
an illocutionary act of a particular illocutionary force." (Searle
and Vanderveken 1985, p. 7). The question at issue here is whether,
in a hearer's understanding an utterance and knowing its meaning, the
hearer recognizes (or is intended to recognize) that the specific
utterance in the specific context was uttered with a specific
illocutionary force.)
It is so tempting to read these writers this way that many, including
myself, have made this assumption. For example, computational models
of dialogue (Allen 1979, Allen and Perrault 1980, Brachman et al.
1979) that my colleagues and I have developed have required the
computer program to recognize which illocutionary act the user
performed in order for the system to respond as intended. However, we
now claim that force recognition is usually unnecessary. For example,
in both of the systems mentioned above, all the inferential power of
the recognition of illocutionary acts was already available from other
inferential sources (Cohen and Levesque 1980). Instead, we claim that
many illocutionary acts can be `defined' in terms of the speaker's and
hearer's mental states, especially beliefs and intentions. As such,
speakers and hearers need only recognize the speaker's
intentions (based on mutual beliefs). Contrary to other proposed
theories, we do not require that those intentions include intentions
that the hearer recognize precisely which illocutionary act(s) were
being performed.
Although one can `label' parts of a discourse with names of
illocutionary acts, illocutionary labeling does not constitute an
explanation of a dialogue. Rather, the labeling itself, if reliably
obtained, constitutes data to be explained by constraints on mental
states and actions. That is, one would show how to derive the
labelings, given their definitions, from (for example) the beliefs and
intentions the participants are predicted to have given what has
happened earlier in the interaction. Although hearers `may' find it
heuristically useful to determine just which illocutionary act was
performed, our view is that illocutionary labeling is an extra task in
which conversational participants may only retrospectively be able to
engage.
The stance that illocutionary acts are not primitive, and need not be
explicitly recognized, is a liberating one. Once taken, it becomes
apparent that many of the difficulties in applying speech act theory
to discourse, or incorporating it into computer systems, stem from
taking these acts too seriously---i.e., as primitives.
-------
∂25-Nov-86 1822 EMMA@CSLI.STANFORD.EDU CSLI Monthly, 2:2 part 2
Received: from CSLI.STANFORD.EDU by SAIL.STANFORD.EDU with TCP; 25 Nov 86 18:22:19 PST
Date: Tue 25 Nov 86 16:12:53-PST
From: Emma Pease <Emma@CSLI.STANFORD.EDU>
Subject: CSLI Monthly, 2:2 part 2
To: friends@CSLI.STANFORD.EDU
Tel: (415) 723-3561
FORM OF THE ARGUMENT
We show that at least some illocutionary acts need not be primitive by
deriving Searle's conditions on various illocutionary acts from an
independently motivated theory of action. The realm of communicative
action is entered following Grice (1969): by postulating a correlation
between the utterance of a sentence with a certain syntactic feature
(e.g., its dominant clause is an imperative) and a complex
propositional attitude expressing the speaker's intention. As a
result of the speaker's uttering a sentence with that feature under
certain conditions, the hearer thinks it is mutually believed that the
speaker has the attitude. Because of general principles governing
beliefs and intentions, other consequences of the speaker's having the
expressed intention can be derived. Such derivations will be used to
form complex action descriptions that capture illocutionary acts in
that the speaker is attempting to bring about some part of the chain
of consequences by means of bringing about an antecedent. For
example, the action description to be called REQUEST will capture a
derivation in which a speaker attempts to make it the case that (1)
the hearer forms the intention to act because (2) it is mutually
believed the speaker wants him/her to act. The conditions licensing
the inference from (2) to (1) can be shown to subsume those claimed by
Searle (1969) to be felicity conditions. However, they have been
derived here from first principles, and without the need for a
primitive action of requesting. Moreover, they meet a set of adequacy
criteria, which include differentiating utterance form from
illocutionary force, handling the major kinds of illocutionary acts,
modeling speakers' insincere performances of illocutionary acts,
providing an analysis of performative utterances, showing how
illocutionary acts can be performed with multiple utterances, and how
multiple illocutionary acts can be simultaneously performed with one
utterance, and explaining indirect speech acts.
Our approach is similar to that of Bach and Harnish (1979) in its
reliance on inference. A theory of rational interaction will provide
the formal foundation for drawing the needed inferences. A notion of
sincerity is crucial for treating deception and nonserious utterances.
Finally, a characterization of utterance features (e.g., mood) is
required in making a transition from the domain of utterance syntax
and semantics, to that of utterance effects (on speakers and hearers).
There are two main steps in constructing the theory.
               C(1)      C(2)              C(i-1)
C: A ---> E(1) ---> E(2) ---> E(3) ---> ... ---> E(i)
Figure 1: Actions producing gated effects
1. `Infer illocutionary point from utterance form'. The theorist
derives the chains of inference needed to connect the intentions and
beliefs signaled by an utterance's form with typical "illocutionary
points" (Searle and Vanderveken 1985), such as getting a hearer to do
some action. These derivations are based on principles of rational
interaction, and are independent of theories of speech acts and
communication.
Specifically (referring to Figure 1), assume actions (A) are
characterized as producing certain effects E(1) when executed in
circumstances C. Separately, assume the theorist has either derived
or postulated relationships between effects of type E(i-1) and other
effects, say of type E(i) such that if E(i-1) holds in the presence of
some gating condition C(i-1), then E(i) holds as well. One can then
prove that in the right circumstances, specifically those satisfying
the gating conditions, doing action A makes E(i) true. (NOTE: Another
way to characterize utterance effects is by applying "default logic"
(Perrault 1986).)
2. `Treat illocutionary acts as attempts'. Because illocutionary
acts can be performed with utterances of many different forms, we
abstract away from any specific form in defining illocutionary acts.
Searle (1969) points out that many communicative acts are attempts to
achieve some effect. For example, requests are attempts to get (in
the right way) the hearer to do some action. We will say an agent
`attempts' to achieve some state of affairs E(i) if s/he does some
action or sequence of actions A that s/he intends should bring about
effect E(i), and believes it does so. The intended effect may not be an
immediate consequence of the utterance act, but could be related to
act A by some chain of causally related effects. Under these
conditions, for A to be an attempt to bring about E(i), the agent
would have to believe that the gating conditions C(i) will hold after
A, and hence if s/he did A in circumstances C, E(i) would obtain.
This way of treating communicative acts has many advantages. The
framework clarifies the degrees of freedom available to the theorist
by showing which properties of communicative acts are consequences of
independently motivated elements, and which properties are
stipulations. Furthermore, it shows the freedom available to
linguistic communities in naming patterns of inference as
illocutionary verbs. Moreover, it gives technical substance to the
use made of such terms as "counts as," "felicity conditions," and
"illocutionary force." However, it makes no commitment to a reasoning
strategy. For example, the theorist's derivations from first
principles may be encapsulated by speakers and hearers as frequently
used lemmas. Moreover, speakers and hearers may not in fact believe
the gating conditions hold, but may instead assume they hold and
"jump" to the conclusion of the lemma.
Below, I describe how the theory addresses two important kinds of
phenomena.
`Performatives'. We basically follow a Bach and Harnish-style
analysis (Bach and Harnish 1979) in which performative utterances are
treated as declarative mood utterances whose content is that the
utterance event itself constitutes the performance of the mentioned
illocutionary act. Because of the essential use made of the utterance
event in assigning truth conditions, performative utterances are a
clear case of the need for situated language use. We can handle
performatives almost entirely because illocutionary acts are defined
as attempts. Since attempts depend on the speaker's
beliefs and intentions, if a speaker sincerely says, for example, "I
request you to open the door," he must believe he did the act with the
requisite beliefs and intentions, and hence the utterance is a
request. Institutionally based performatives work because society
defines attempts by certain people in the right circumstances as
successes, such as judges who say "I now pronounce you husband and
wife." Finally, perlocutionary verbs, e.g., "frighten," cannot be
used performatively because frightening requires success, not a mere
attempt; neither the logic of rational interaction, nor institutions,
make attempts to frighten into frightenings.
`Multiact utterances and Multiutterance acts'. The use of inference
allows both of these phenomena to be addressed. For the first,
observe that there may be many other chains of inference emanating
from the utterance event. Hence, an utterance may be an attempt by
the speaker to achieve many different effects simultaneously, some of
which may be labeled by illocutionary verbs in a given language.
Multiutterance acts are a natural extension of our approach because
action A that brings about the core effects may in fact be a sequence
of utterance acts. This immediately allows the formalism to address
problems of discourse, but specific solutions remain to be developed.
(NOTE: See (Grosz and Sidner 1986) for progress on this front.)
However, notice that these acts are problematic for theories requiring
force recognition for each utterance. It may take five sentences, and
three speaking turns, for a speaker to complete a request. On
force-recognition accounts, the illocutionary force of each utterance
would have to be recognized. However, such theories do not require
that a hearer identify the illocutionary force of the `discourse'
(here, as a request). But, that would be the most important act to be
recognized. Moreover, if such theories did so require, they would
have to provide a calculus of forces to describe how the individual
ones combine to form another.
Many discourse analysts have tried to give such analyses in terms of
sequences of illocutionary acts and discourse "grammars." Apart from
the fact that multiact utterances prevent the structure of a dialogue
from being analyzed as a tree, we believe such analyses are operating
at the wrong level. If illocutionary acts are definable in terms of
mental states, a theory of communication will explain discourse with a
logic of those attitudes and their contents. Thus, one needs to
characterize how the effects of individual utterances accumulate to
achieve more global intended effects. The labeling of individual
utterances as the performance of specific illocutionary acts
contributes nothing to an account of effect accumulation.
To the extent that our analysis is on the mark, the subject of
illocutionary acts is in some sense less interesting than it has been
made out to be. That is, the interest should be in the nature of
rational interaction and in the kinds of reasoning (especially
nonmonotonic (Kautz 1986, Perrault 1986)) that agents use to plan and
to recognize the intentions and plans of others. Constraints on the
use of particular illocutionary acts in conversation should follow
from the underlying principles of rationality, not from a list of
sequencing constraints (e.g., adjacency pairs). (NOTE: To see that
this is not just a straw man, consider the following passage from
Searle and Vanderveken (1985, p. 11): "But we will not get an adequate
account of linguistic competence or of speech acts until we can
describe the speaker's ability to produce and understand utterances
(i.e., to perform and understand illocutionary acts) in `ordered
speech act sequences' that constitute arguments, discussions, buying
and selling, exchanging letters, making jokes, etc. ... The key to
understanding the structure of conversations is to see that each
illocutionary act creates the possibility of a finite and usually
quite limited set of appropriate illocutionary acts as replies."
[emphasis in original])
To make this more concrete, I shall briefly describe aspects of our
approach to a theory of rational interaction that serves as the
foundation for analyzing communication.
-------
∂25-Nov-86 1931 EMMA@CSLI.STANFORD.EDU CSLI Monthly, 2:2 part 3
Received: from CSLI.STANFORD.EDU by SAIL.STANFORD.EDU with TCP; 25 Nov 86 19:30:56 PST
Date: Tue 25 Nov 86 16:13:34-PST
From: Emma Pease <Emma@CSLI.STANFORD.EDU>
Subject: CSLI Monthly, 2:2 part 3
To: friends@CSLI.STANFORD.EDU
Tel: (415) 723-3561
RATIONAL INTERACTION
Bratman (1986) argues that rational behavior cannot be analyzed just
in terms of beliefs and desires (as many philosophers have held). A
third mental state, intention, which is related in many interesting
ways to beliefs and desires but is not reducible to them, is
necessary. There are two justifications for this claim. First,
noting that agents are resource-bounded, Bratman suggests that no
agent can continually weigh his/her competing desires, and concomitant
beliefs, in deciding what to do next. At some point, the agent must
just `settle on' one state of affairs for which to aim. Deciding what
to do establishes a limited form of `commitment'. We shall explore
the consequences of such commitments.
A second reason is the need to coordinate one's future actions. Once
a future act is settled on, i.e., intended, one typically decides on
other future actions to take with that action as given. This ability
to plan to do some act A in the future, and to base decisions on what
to do subsequent to A, requires that a rational agent `not'
simultaneously believe s/he will `not' do A. If s/he did, the
rational agent would not be able to plan past A since s/he believes it
will not be done. Without some notion of commitment, deciding what
else to do would be a hopeless task.
Bratman argues that intentions play the following three functional
roles:
1) `Intentions normally pose problems for the agent; the agent
needs to determine a way to achieve them.'
2) `Intentions provide a "screen of admissibility" for adopting
other intentions'. Whereas desires can be inconsistent, agents do
not normally adopt intentions that they believe conflict with their
present and future-directed intentions.
3) `Agents "track" the success of their attempts to achieve their
intentions'. Not only do agents care whether their attempts
succeed, but they are disposed to replan to achieve the intended
effects if earlier attempts fail.
In addition to the above functional roles, it has been argued that
intending should satisfy at least the following property:
4) `Agents need not intend all the expected side effects of their
intentions'. We will develop a theory in which expected side
effects are `chosen', but not intended.
Intention as a Composite Concept
We model intention as a composite concept specifying what the agent
has `chosen' and how the agent is `committed' to that choice. First,
consider agents as choosing among their (possibly inconsistent)
desires those they want most. Call these chosen desires, loosely,
goals. (NOTE: Chosen desires are ones that speech act theorists claim
to be conveyed by illocutionary acts such as requests.) By assumption,
chosen desires are consistent. We will give them a possible-world
semantics, and hence the agent will have chosen a set of worlds in
which the goals hold.
Next, consider an agent to have a `persistent goal' if s/he has a goal
(i.e., a proposition true in all of the agent's chosen worlds) that
s/he believes currently to be false, and that will continue to be
chosen at least as long as certain facts hold. Persistence involves
an agent's `internal' commitment over time to his/her choices. (NOTE:
This is not a `social' commitment. It remains to be seen if the
latter can be built out of the former.) For example, the ultimate
fanatic is persistent with respect to believing his/her goal has been
achieved or is impossible. The fanatical agent will only drop his/her
commitment to achieving the goal if either of those circumstances
holds.
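In schematic form (a gloss of the prose, not the formal definition
given in Cohen and Levesque (1986a)):

      P-GOAL(x, p)  iff
        (a) BEL(x, ¬p)   -- x believes p is currently false;
        (b) GOAL(x, p)   -- p is true in all of x's chosen worlds;
        (c) x keeps (b) at least until BEL(x, p) or
            BEL(x, p is impossible)
            -- the fanatic's only escape clauses.

Clause (c) is what makes the goal persistent; the relativized notions
introduced below widen its escape conditions.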
Thus, we model intention as a kind of persistent goal---a persistent
goal to do an action, believing one is about to do it, or achieve some
state of affairs, believing one is about to achieve it. When modeled
this way, intention can be shown to have Bratman's functional
characteristics. Although I cannot substantiate that claim here (see
(Cohen and Levesque 1986a) for details) it is instructive to see how
the concept of persistence avoids one of the thornier issues for a
theory of intention---closure under expected consequences.
According to our analysis of goal (as a proposition true in a chosen
set of worlds), what one believes to be true must be true in all one's
chosen worlds. Hence, if one believes `p ⊃ q', `p ⊃ q' is true in all
chosen worlds. So, if one has chosen worlds in which `p', then one
has chosen worlds in which `q'.
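Written out, the argument is just this (a restatement of the
preceding sentences):

      1. BEL(x, p ⊃ q)                                (premise)
      2. p ⊃ q is true in all of x's chosen worlds    (from 1)
      3. GOAL(x, p): p is true in all chosen worlds   (premise)
      4. GOAL(x, q): q is true in all chosen worlds   (from 2, 3)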
Now, consider a case of taking a drug to cure an illness, believing
that as a side effect, one will upset one's stomach. In choosing to
take the drug, the agent has surely chosen stomach distress. But, the
agent did not intend to upset his/her stomach. Using our analysis of
intention, the agent will have adopted a persistent goal to take the
drug. However, the sickening side effect is only present in the
agent's chosen worlds because of a `belief'. Should the agent take a
new and improved version of the drug, and not upset his/her stomach,
s/he could change his/her belief about the relationship between taking
the drug and its gastric effects. In such a case, stomach distress
would no longer be present in the agent's chosen worlds. But, the
agent would have dropped the goal of upsetting his/her stomach for
reasons other than believing it was achieved or believing it was
impossible. Hence, the agent was not committed to upsetting his/her
stomach, and thus did not intend to upset it. (NOTE: If the agent were
truly committed to gastric distress, for instance as his/her indicator
that the drug was effective, then if his/her stomach were not upset
after taking the drug, s/he would ask for a refund.)
This example deals with expected consequences of one's intentions.
What about other consequences? Strictly speaking, the formalism
predicts that agents intend only the logical equivalents of their
intentions and, in some cases, their logical consequences, along with
consequences that they believe always hold. Thus, even using a
possible-worlds approach, one can develop an analysis that satisfies
many desirable properties of a model of intention. I believe an
approach using situation theory would tighten up the analysis a bit,
so that agents could choose states of affairs (in their technical
sense) rather than entire worlds. However, I would expect that much
of the present analysis would remain.
A useful extension of the concept of persistent goal, upon which one
can define an extended concept of intention, is the expansion of the
conditions under which an agent can give up his/her goal. When
necessary conditions for an agent's dropping a goal include his/her
having other goals (call them "supergoals"), the agent can generate a
chain of goals such that if the supergoals are given up, so may the
subgoals. If the conditions necessary for an agent's giving up a
persistent goal include his/her believing that some `other' agent has
a persistent goal, a chain of interpersonally linked goals is created.
For example, if Mary requests Sam to do something and Sam agrees,
Sam's goal should be persistent unless he finds out Mary no longer
wants him to do the requested action (or, in the usual way, he has
done the action or finds it to be impossible). Both requests and
promises are analyzed in terms of such "interpersonally relativized"
persistent goals.
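Schematically, Sam's state after agreeing might be glossed as follows
(my notation, with the relativizing condition as a third argument,
and suppressing much detail):

      P-R-GOAL(Sam, Done(Sam, act), GOAL(Mary, Done(Sam, act)))

a persistent goal relativized to Mary's, which Sam may drop not only
when he believes the act is done or impossible, but also when he
believes Mary no longer wants it done.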
BACK TO DISCOURSE
To see how all this comes into play in discourse, let us reconsider
the earlier trivial dialogue. Loosely speaking, the effects of
sincerely uttering an imperative in the right circumstances are: the
speaker (A) makes it mutually believed with the hearer (B) that A's
persistent goal is that B form an intention, relative to A's goal that
B do some act, thereby leading B to act. In our example, by
attempting to achieve all these effects, A has requested B to open the
door. B did not have to recognize the imperative itself as a request,
i.e., as an attempt to achieve all these effects. The effects (i.e.,
the mutual belief about A's persistent goals) just needed to hold.
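Very roughly, the resulting state might be written (again my gloss,
suppressing many details):

      MB(A, B, P-GOAL(A, INTEND(B, open-door) relative to
                         GOAL(A, Done(B, open-door))))

that is, it is mutually believed that A has a persistent goal that B
form the intention to open the door, an intention relativized to A's
wanting it done.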
So much for A's utterance. Why does B say "Sure"? We would claim
that B knows that requirements of consistency on agents' persistent
goals and intentions mean that the adoption of a persistent goal
constrains the adoption of others. A cooperative step one can take
for others is to tell them when their persistent goals have been
achieved (so they can be dropped). In the case at hand, A's
persistent goal was B's forming an intention to act, relative to A's
desires. By saying "Sure", B has made it mutually believed that he has
adopted that relativized intention. Now, it seems not unreasonable to
characterize making a commitment `to' another person to do something
in terms of making it mutually believed that one has an
intention/persistent goal (i.e., one is internally committed) to do
that action `relative to the other's goals'. (NOTE: We are not trying
to characterize the institutional concept of obligation here, but are
trying to shed some light on its rational underpinnings.) This helps
to explain why one cannot felicitously promise to someone something
one knows he does not want.
B only needs to recognize the illocutionary force of the utterance if
s/he is concerned with why s/he is intended to form his/her intention
(e.g., because of his/her being cooperative, or because of A's
authority). The claim that illocutionary force recognition is crucial
to all communication would say that hearers must reason about how they
are intended to adopt their attitudes. Although I believe people do
not do this frequently, the burden of proof is on those who argue that
such reasoning is necessary to successful communication. (NOTE: Some
illocutionary acts, such as greetings, have no propositional content.
Their effects consist entirely of getting the hearer to recognize that
the speaker was trying to perform that act (Searle and Vanderveken
1985). Thus, at least for these acts, illocutionary act recognition
is required for communication to take place. While admitting this to
be true, we suggest that these acts are the exception rather than the
rule.) Generally speaking, the participants' intentions and the
interactions among those intentions are the keys to dialogue success.
Illocutionary act recognition is mostly beside `that' point.
ACKNOWLEDGMENTS
Many thanks to Herb Clark, David Israel, Martha Pollack and the
Discourse, Intention, and Action group at CSLI for valuable comments.
REFERENCES
Allen, J. F. 1979. A Plan-based Approach to Speech Act Recognition.
Technical Report 131. Department of Computer Science, University of
Toronto, Toronto, Canada.
Allen, J. F., and C. R. Perrault. 1980. Analyzing Intention in
Utterances. "Artificial Intelligence" 15(3):143--78.
Austin, J. L. 1962. "How To Do Things With Words." London: Oxford
University Press.
Bach, K., and R. Harnish. 1979. "Linguistic Communication and Speech
Acts". Cambridge, Mass.: MIT Press.
Brachman, R., R. Bobrow, P. Cohen, J. Klovstad, B. L. Webber, and
W. A. Woods. 1979. "Research in Natural Language Understanding".
Technical Report 4274. Cambridge, Mass.: Bolt Beranek and Newman Inc.
Bratman, M. 1986. Intentions, Plans, and Practical Reason. In
preparation.
Cohen, P. R., and H. J. Levesque. Communication as Rational
Interaction. In preparation.
Cohen, P. R., and H. J. Levesque. 1986. Persistence, Intention, and
Commitment. Timberline Workshop on Planning and Practical Reasoning.
Los Altos, Calif.: Morgan Kaufmann Publishers, Inc.
Cohen, P. R., and H. J. Levesque. 1980. Speech Acts and the
Recognition of Shared Plans. In "Proceedings of the Third Biennial
Conference". Canadian Society for Computational Studies of
Intelligence, Victoria, B. C., pp. 263--71.
Grice, H. P. 1969. Utterer's Meaning and Intentions. "Philosophical
Review" 78(2):147--77.
Grosz, B. J., and C. L. Sidner. 1986. Attention, Intentions, and the
Structure of Discourse. "Computational Linguistics" 12(3):175--204.
Kautz, H. 1986. Generalized Plan Recognition. In "Proceedings of
the Fifth Annual Meeting of the American Association for Artificial
Intelligence", Philadelphia, Penn.
Perrault, C. R. An Application of Default Logic to Speech Act Theory.
In preparation.
Searle, J. R. 1969. "Speech Acts: An Essay in the Philosophy of
Language". Cambridge: Cambridge University Press.
Searle, J. R., and D. Vanderveken. 1985. "Foundations of
Illocutionary Logic". New York, N. Y.: Cambridge University Press.
Strawson, P. F. 1964. Intention and Convention in Speech Acts. "The
Philosophical Review" 73(4):439--60. Reprinted in "Logico-Linguistic
Papers". London: Methuen, 1971.
-------
∂25-Nov-86 2112 EMMA@CSLI.STANFORD.EDU CSLI Monthly, 2:2 part 4
Received: from CSLI.STANFORD.EDU by SAIL.STANFORD.EDU with TCP; 25 Nov 86 21:12:02 PST
Date: Tue 25 Nov 86 16:14:29-PST
From: Emma Pease <Emma@CSLI.STANFORD.EDU>
Subject: CSLI Monthly, 2:2 part 4
To: friends@CSLI.STANFORD.EDU
Tel: (415) 723-3561
------------------
THE WEDGE
Syntax or Semantics?
[Editor's note: Peter Ludlow, a CSLI Visiting Scholar, and some of the
members of the STASS project have kindly given the Monthly permission
to publish this recent exchange of electronic mail messages.]
Date: Sat 15 Nov 86 14:36:24-PST
From: Peter Ludlow <LUDLOW@CSLI.STANFORD.EDU>
Subject: syntax or semantics
To: STASS@CSLI.STANFORD.EDU
All,
I think this might be the right forum to air some concerns that I
have. The concerns involve the amount of burden taken over from
syntax by situation semantics. As far as I'm concerned there is no
problem with the idea that situation semantics should be capable of
doing the work that syntax does with respect to binding, scope, etc.
If some linguists find it more helpful to think about these phenomena
as semantic, then we should provide the resources for them to study
the phenomena as semantic.
My concern is that situation semantics may be becoming a theory in
which these phenomena MUST be treated as semantic. I see the job of
the situation semanticist as providing certain tools to aid ongoing
linguistic inquiry. We know that there is a large class of semantic
phenomena which resists study from a Davidsonian and Montagovian
perspective. It is with respect to these phenomena, I think, that the
situation semanticist should be most concerned.
I can't see the point of forcing a situation-theoretic way of doing
things on syntacticians who are involved in a productive research
paradigm. Frankly, I can't see the difference between the view that
binding theory etc. should be recast in semantics and Hartry Field's
absurd view that physics should be recast in measurement theory
because set theory is epistemologically troublesome.
Peter
-------
Date: Sun 16 Nov 86 04:15:55-PST
From: Jon Barwise <BARWISE@CSLI.STANFORD.EDU>
Subject: Re: syntax or semantics
To: LUDLOW@CSLI.STANFORD.EDU
cc: STASS@CSLI.STANFORD.EDU
Peter,
You need to distinguish (1) situation theory, (2) situation semantics,
and (3) any particular situation semantics account. The first is our
theory of the world of semantic objects. The second is the general
program of applying the first to the analysis of meaningful types of
things.
Within this program there is lots of room for competing accounts of
any particular phenomena.
Now there is nothing in either (1) or (2) that FORCES you to treat
scope, say, or coreference as a purely semantical phenomenon. There
will be room for competing accounts, including one where scope and
coreference are indicated in the syntax.
On the other hand, the Relation Theory of Meaning (RTM) is at the core
of situation semantics, and it does give a perspective on language and
meaning which suggests a pretty radical rethinking of the relation
between syntax and semantics. It does not prevent you from putting a
lot of weight on the syntax, but it makes you ask why it belongs
there. And most of us in the group have come to the conclusion that
it is misplaced. If, as the RTM suggests, syntax and semantics are
mutually constraining, but neither prior to the other, then you can
see why accounts that took syntax to be autonomous and prior to
semantics would have to "discover" all kinds of invisible syntactic
features that would be better seen as semantic.
So while situation semantics does not prevent someone from treating
coreference, say, as syntactic, it is hard for me to imagine how
anyone who has understood the perspective could think that it was.
Jon
-------
Date: Sun 16 Nov 86 10:30:46-PST
From: Mark Gawron <GAWRON@CSLI.STANFORD.EDU>
Subject: Ludlow's message
To: stass@CSLI.STANFORD.EDU
It seems as if there are two ways of looking at scientific explanation
-- although there probably aren't two ways of doing it -- as
theory-making or as theory-explaining.
I think the view evolving in the STASS group -- and growing out of the
Relational Theory of Meaning -- is one that leads to useful
theory-explaining. The view that indices shouldn't be thought of as
decorations of syntactic representations isn't a denial that a whole
line of productive research is meaningful -- it's an attempt to
explain that line of research, to show how what indices account for
falls out of the relation between structure, circumstance, and
content. A clear explication of that relation can be useful even if
the interpretation offered is completely compatible with current
formalizations of indices -- in other words, it will be useful even if
our notation for the interpretation turns out to be a notational
variant of the current formalizations. Why? Because indices DON'T
currently have any well-grounded interpretation (either set-theoretic
or measure-theoretic) and currently persist purely as syntactic
decorations. The way we think about our theoretical objects does
influence the way we develop our theories; and certainly thinking
about indices as syntactic objects has had an influence on various
versions of various binding theories. It is completely consistent
with that view -- for example -- that indices might themselves have
internal structure, might be decorated with further decorations, and
there are a number of proposals in the literature to do just that
(Haik 1984, Chomsky 1980, Chametzky 1985).
Summing up: the aim isn't so much to stop the presses on binding
theory as it is to come up with a well-grounded view of the facts at
issue to constrain the ways in which a binding theory might develop.
mark
-------
Date: Mon 17 Nov 86 13:15:05-PST
From: Peter Ludlow <LUDLOW@CSLI.STANFORD.EDU>
Subject: on binding theory etc.
To: STASS@CSLI.STANFORD.EDU
A couple of comments regarding Jon and Mark's replies --
Regarding Jon's remarks, I should note that my worries are not
directed at situation theory, nor situation semantics generally, but
toward certain situation-theoretic accounts -- those which try to
subsume binding theory. I agree that syntax and semantics should be
mutually constraining; I just disagree with the idea that binding
theory (and, while we're at it, scope) is best given a semantic
account.
But perhaps we do not disagree. Let me clarify what I take binding
theory to be a theory of. It is not a theory of what it means for two
NPs to be coreferential (or for one to bind the other), nor is it a
theory of when they will be coreferential. Rather, it is a theory of
constraints on possible interpretations due to the relative positions
of constituents in a p-marker.
Let me illustrate. Binding theory does not tell us whether "Bill" and
"he" are coreferential in a given utterance of "Bill thinks he is
groovy." Rather, binding theory tells us that so far as syntax is
concerned, they can be coreferential. It is the job of the situation
semanticist to determine under what situations "Bill" and "he" ARE
coreferential. Binding theory is not so flexible in other
circumstances. In "Bill grooves on himself," binding theory dictates
that from the point of view of syntax, Bill and the pronoun must
corefer. Here, I think, the situation semanticist has little to add.
Mark's comments, if I may crudely summarize them, suggest that
situation semantics is not in competition with binding theory, but is
a theory of what binding theorists are really studying. I wonder.
If it is a deeper explanation of coreference to say two NPs utilize
the same parameter than to say that they refer to the same object,
then I suppose Mark has a point. Frankly, I can't see that notions
like coreference need an explanation. I've never had a problem
understanding what coreference was.
Mark is right to point out that indices explain nothing, however. As
far as I'm concerned they are just heuristic devices to help us keep
track of the binding facts imposed upon a sentence by the grammar. If
GB grammarians confuse indices for significant portions of the syntax
and define certain grammatical properties off of indices, then, to my
thinking, this is just bad linguistics, and the last thing it needs is
a theory.
Peter
-------
Date: Mon 17 Nov 86 13:31:28-PST
From: Mark Gawron <Gawron@CSLI.STANFORD.EDU>
Subject: Re: on binding theory etc.
To: LUDLOW@CSLI.STANFORD.EDU
I think we're converging, but there still seem to be some points that
need clarifying.
(1) If all that indices were used for in GB was to indicate
coreference, I doubt that explaining them would need much work. The
point is that they're not. The relationship between a wh-operator and
its trace, whatever it is, isn't coreference, nor, in general, do
contraindexing conditions amount to disjoint reference conditions,
even in cases with referential NPs.
(2) The point of referring to work which has given indices internal
structure was not to make a promise that we could "explain" such uses
of indexing, but to show that there were uses of indexing that seemed
to defy explanation, but which were consistent with the view that they
were syntactic decorations. The idea is that if some clear
interpretation underlies their use, people won't do such odd things
with indices.
(3) I agree that the point is to provide a useful discovery vehicle,
and I agree that, to some extent, GB's binding theory has been just
that. And yes, the only real validation of any explanation is to do
just that, and that includes our work on anaphora. If indices were
the only thing at issue, there would probably be small promise of
that. We think there are a number of issues that can be addressed
reasonably well from this perspective...
mark
--------
Date: Mon 17 Nov 86 15:39:03-PST
From: Carl Pollard <POLLARD@CSLI.STANFORD.EDU>
Subject: Re: on binding theory etc.
To: LUDLOW@CSLI.STANFORD.EDU
cc: STASS@CSLI.STANFORD.EDU
Since this is a free-for-all, here is my two cents.
The idea that so-called indices, usually regarded as syntactic in some
ill-defined way, are better thought of as something semantic
(parameters of the types of things that language-use situations
describe) IS a productive hypothesis for guiding research; people who
have worked on long-distance dependencies, anaphora, control,
agreement, etc. within the HPSG framework have found it to be a
perfectly "effective discovery vehicle" (calling a hypothesis that
reminds me of calling a toothpaste an "effective decay-preventive
dentifrice") for the past two years or so.
But it is a somewhat pernicious simplification to describe it as
providing "a semantic account" of binding, as opposed to a syntactic
account. A fundamental principle in situation semantics is that
linguistic meaning is a CONSTRAINT in the technical sense of a
relation between parametrized types, more specifically a relation
between (at least) the type of the utterance situation and the type of
thing (individual, property, situation, etc.) the utterance
describes. Thus aspects of the expression (including, potentially,
syntactic category, configuration, grammatical relations, phonology,
morphology) and aspects of the content (including thematic roles,
anchoring or absorption of parameters, scope, etc.) are MUTUALLY
CONSTRAINING. In particular, a situation semantics-oriented account
of binding would seek to account for the mutual constraints that hold
among the syntactic components of certain language-use situations
(i.e., uses of traces, reflexives and reciprocals, personal pronouns,
ellipses, proper nouns, quantifiers, definite and indefinite
descriptions, etc.) and the parameters of the corresponding content
component that those uses introduce. This is a very different thing,
and in my opinion a much better thing, than giving either a strictly
syntactic or a strictly semantic account -- either of which would be
senseless from the point of view of the Relational Theory of Meaning.
On the other hand, it is very similar -- perhaps just more general --
to "a theory of constraints on possible interpretations due to the
relative positions of constituents in a p-marker."
As far as (co)referentiality is concerned, it is not enough for a
binding theory to say whether a given configuration requires or
forbids coreference between given pairs of elements. Finer
distinctions are required, as lots of work done within both situation
semantics and discourse representation theory -- by people like Peter
Sells, Craige Roberts, Mats Rooth, Jon Barwise, as well as Gawron and
Peters -- has taken great pains to show, although there is still not a
consensus as to just what the right distinctions are: just consider
"Only Bill grooves on himself." It is not true that the situation
semanticist has little to add to the principle of binding theory that
"Bill" and "himself" must corefer. Neither is it appropriate to
characterize the situation semanticist's job as "to determine under
what situations `Bill' and `he' are coreferential" if the intention of
that characterization is to exclude syntax from the subject matter of
situation semantics (and, presumably, leave that in the hands of
syntacticians of the correct persuasion); syntax and other aspects of
the utterance situation figure in the meaning relation just as much as
the semantic content does.
As far as I can tell, it IS deeper to say (for example) that two NPs
utilize the same parameter than to say that they refer to the same
object (what object is referred to by "it" in "every farmer who owns a
donkey beats it"?). Notions like coreference DO need an explanation,
and many people over the years have had profound difficulties
understanding what it is.
Carl
-------
Date: Tue 18 Nov 86 16:52:21-PST
From: Peter Ludlow <LUDLOW@CSLI.STANFORD.EDU>
Subject: reply to Mark and Carl
To: STASS@CSLI.STANFORD.EDU
WRT Mark's comments,
I agree that we seem to be converging. Mark is right to point out
that binding theory is more than just a theory of coreference. I just
used coreference as an example of the kind of thing I have in mind.
Binding theory is also a theory of what parts of syntax are operators,
what parts are variables, and when a given operator binds a variable.
Now a semanticist (situation or otherwise) will have something
interesting to say about the interpretation of quantifiers. But by
"interpretation of quantifiers" I mean to speak of whether quantifiers
are objectual, substitutional, etc. and questions of how they come to
be interpreted as having a group reading, a distributed reading, or
whatnot. But I guess I wouldn't consider any of these questions to be
questions in binding theory per se.
Mark's second point is that people wouldn't do such odd things with
indices if they had a clear interpretation of how they were being
used. With this I agree, but I think the clear interpretation might
just be a statement of what syntactic relations indices are used to
represent. This is a point that I know Higginbotham has made, and it
seems to me that Susan Stucky has made the same point about syntactic
representations generally. (Right Susan?)
Mark's third point is that situation semantics will prove to be a more
productive paradigm for the study of binding theory facts than the
current syntactic one. Time will tell.
WRT Carl's comments,
I would distinguish a theory of binding (including a theory of index
assignment) from a theory of indices. My point is that the former
should be thought of as syntactic. I don't care about the theory of
indices. I'm not sure what a theory of indices would be and I doubt
that one can make sense of the notion of constructing either a
syntactic or semantic account of what an index is.
Perhaps Carl just means that semantics determines the assignment of
indices. If this is the claim, he is partially right (if one uses
indices to signify, among other things, all cases of coreference), and
I agree that for discourse anaphora and a number of other phenomena,
syntax will be silent on how indices are to be assigned. I should add
that, for me, these phenomena fall outside of binding theory.
Carl's second point, that the discussion has been oversimplified, is
perhaps correct. What I don't see is that my view is in conflict with
the idea that syntax and semantics are mutually constraining. This
view is of course implicit even in my remark (quoted by Carl) that
binding theory is a theory of "constraints on possible interpretations
[I should add of pronouns and bound variables] due to the relative
positions of constituents in a p-marker." All I'm saying is that
binding theory is a theory of some of the constraints placed on
interpretation by the syntax. Syntax surely does not provide all the
constraints, and if you like, you can say it only provides 1% of the
constraints. And of course the theory of syntax must be constructed
with the goal of getting the interpretation of sentences right.
Carl is correct to point out that the situation semanticist should not
be excluded from doing syntax. I can see the point that syntactic
objects are situation-theoretic objects. My only concern is that the
contribution of the syntactic object to the meaning of an utterance be
given its full due.
I still don't see how "utilizes the same parameter" is "deeper" than
"refers to the same object." But perhaps I am dense. WRT the "it" of
donkey sentences: it does not refer, but is a bound variable (if Heim
is right the indefinite article is a bound variable here too).
Indices are not in want of explanation, but binding theory facts are.
Remember that it is not reference itself that I am interested in, but
merely the fact that constituents in certain syntactic configurations
must corefer, in other configurations they cannot, and in still other
configurations an operator can bind a variable. Question: Why is it
unsatisfying or unexplanatory to embed binding theory, so understood,
in generative syntax?
--Peter
-------
Date: Wed 19 Nov 86 08:18:14-PST
From: Craige Roberts <Croberts@CSLI.STANFORD.EDU>
Subject: more on indices
To: STASS@CSLI.STANFORD.EDU
Peter says in his last note on indices that "it is not reference
itself that I am interested in, but merely the fact that constituents
in certain syntactic configurations must corefer, in other
configurations they cannot, and in still other configurations an
operator can bind a variable." He then asks: "Why is it unsatisfying
or unexplanatory to embed binding theory, so understood, in generative
syntax?" I am in full agreement with the claim that syntax and
semantics are mutually constraining, and in principle I have no
problem with abstracting away from the facts of interpretation for the
purpose of examining the more purely syntactic constraints on binding
(c-command, f-command, governing category, whatever). But
syntacticians would benefit by paying more attention to the semantic
(and pragmatic) side of the analysis of anaphoric phenomena. For
example, I see a number of problems with the binding theory of the
Government and Binding framework which arise from a failure to take
the interpretation of indices more seriously, as well as a failure to
take into account various facts about focus and other contextually
determined elements of interpretation. For example, Gareth Evans,
Tanya Reinhart, and others have all pointed out that certain examples
which the binding theory predicts to be ungrammatical are in fact
quite acceptable in the proper context, perhaps with the proper
intonation, etc. One's theory of binding, even if only a theory of
the relevant syntactic constraints on the sentential level, has to be
consistent with what we find in larger contexts. I assume that Mark
had things like this in mind when he said that a theory of anaphora
should have a "pragmatic component that might play Gricean principles
off the syntactic component to `derive' properties of both the
reference-tracking features of the linguistic circumstances and their
relationship to syntactic structure"; that is exactly what Reinhart
has suggested for the disjoint reference facts, instead of trying to
force them into a purely syntactic theory. Further, note that it IS
generally assumed by Binding theorists that indices have an
interpretation, and this assumption has been the basis of many of the
judgments of (un)grammaticality where anaphoric relations are
involved; as some of the comments in this discussion (including the
one above from Peter) show, for most folks coindexation means
coreference and noncoindexation means disjoint reference. First,
given the possibilities for anaphora in discourse, it is clearly wrong
to say that two NPs which are not coindexed are disjoint in reference.
And even the more plausible claim that coindexation means coreference
may well be wrong--Leslie Saxon has found cases in Dogrib (an
Athapaskan language) of "disjoint anaphors," pronouns which must be
bound within their governing category, like English reflexives, but
mean something like "someone other than the individual denoted by my
antecedent"; and my work on plural anaphors in distributive predicates
also challenges the coindexation-is-coreference assumption. Binding
seems to be a very abstract relationship, its interpretation
determined partly by lexical properties of the anaphors involved,
partly by operations (such as distributivity) on larger constituents
in which they occur. What I am saying, then, amounts to this: if
binding theory is to be EMPIRICALLY ADEQUATE (let alone explanatory),
then syntacticians must heed the semantic and pragmatic "components"
of a full theory of anaphora. Cooperation and mutual respect will
lead to better theories.
-------
∂25-Nov-86 2157 EMMA@CSLI.STANFORD.EDU CSLI Monthly, 2:2 part 5
Received: from CSLI.STANFORD.EDU by SAIL.STANFORD.EDU with TCP; 25 Nov 86 21:56:59 PST
Date: Tue 25 Nov 86 16:15:21-PST
From: Emma Pease <Emma@CSLI.STANFORD.EDU>
Subject: CSLI Monthly, 2:2 part 5
To: friends@CSLI.STANFORD.EDU
Tel: (415) 723-3561
------------------
DISTRIBUTIVITY
Craige Roberts, CSLI Postdoctoral Fellow
My work on distributivity grew out of a general interest in the
relationship between anaphora and operator scope, in the context of a
theory of discourse. The work of Heim (1982) and Kamp (1981), and my
extensions of Discourse Representation Theory to reflect the
phenomenon of Modal Subordination (see Roberts (1986)) all support a
simple generalization about anaphoric relations and referential
dependence more generally: an anaphoric element may take an NP as
antecedent only if any and all operators which have scope over the
potential antecedent have scope over the anaphor as well. Certain
phenomena associated with distributivity provide a challenge to this
hypothesis, and hence must be addressed in order to maintain it.
Further, the analysis of distributivity is a prerequisite to the
extension of this generalization to plural anaphora, as we shall see
below. Conversely, considering the distributive phenomena from the
perspective of a theory of anaphora in discourse provides insight into
the basic character of distributivity, and has led to a fairly simple
and general characterization of it which differs in important respects
from earlier theories (cf. Lakoff (1970), Bennett (1974), Scha (1981),
Link (1983), for example). Consider the following examples:
(1) Four men lifted a piano.
(2) Bill, Pete, Hank, and Dan lifted a piano.
(3) Bill, Pete, Hank, and Dan each lifted a piano.
(4) Each man lifted a piano.
(5) It was heavy.
(6) He developed a crick in his back later.
(7) They each developed a crick in their back later.
(1) and (2) are ambiguous in the same way. There is a group reading,
where together the four men lifted a single piano; and there is a
distributive reading, where each of the men in question has the
property of having (singlehandedly) lifted a piano. (In fact, there
are two distributive readings, one where the men each lifted the same
piano, and another where there may have been a different piano
involved in each lifting. For the purposes of this discussion, we
will ignore the first kind of reading, where the indefinite has wide
scope over the subject, and concentrate only on the other reading. In
fact, the difference is not crucial for the theory I propose, but
illustrates the important fact that distributivity is not reducible to
questions of NP scope.) (2) is ambiguous in the same way as (1).
(3), on the other hand, has only the distributive reading. And if we
assume that there are only four men, then the truth conditions for (4)
are identical to those for the distributive readings of (1), (2), and
(3), again ignoring the reading where the indefinite object has wide
scope over the subject. Now compare the anaphoric potential of the
NPs in these examples under various readings. On the group reading,
it is felicitous to follow (1) or (2) by (5) with `a piano' serving as
antecedent for `it', but on the distributive reading which interests
us, neither (1)+(5) nor (2)+(5) is felicitous; similarly (3)+(5) is
infelicitous, as is (4)+(5) on the intended reading of (4). The
parallel between the subject in (1) and the quantificational subject
of (4) tempts one to analyze the former as quantificational too. But
this would not solve the problem of the analysis of distributivity,
since the parallel extends to (2) and (3), with subjects which are
clearly nonquantificational. Further, the subjects of (1) to (3)
display a different anaphoric potential than that of (4). The latter
may not serve as an antecedent for anaphors in subsequent
sentences--witness the infelicity of (4) followed by (6). But the
subjects of (1) to (3) may serve as antecedents on any of their
possible readings; hence any of these examples may precede (7), with
THEY anaphoric to their subject.
One of the keys to the account of such examples is in the analysis of
(3). Dowty and Brodie (1984) argue that this "floated" EACH is an
adverbial operator, which modifies the predicate to give a sense which
may be paraphrased, "this predicate is true of each of the members of
the group denoted by the subject." Here it is the adverbial operator
which introduces the universal quantificational force, rather than a
quantificational subject, as in (4). We may then capture the
parallels between (3) and the truth-conditionally equivalent
distributive reading of (2) by positing an implicit adverbial
distributivity operator in the latter example as well. The extension
of this treatment to (1) is then natural.
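The implicit operator can be written as a predicate modifier, call it
D (my label, in the spirit of Link (1983)):

      D(P) = LAMBDAX (for every atomic part x of the group X, P(x))

so that the distributive readings of (2) and (3) both come out as
D(lifted a piano) predicated of the group Bill+Pete+Hank+Dan.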
Now we can explain the anaphoric facts about (1) to (7) under the
hypothesis about anaphora mentioned above: On the group readings,
there are no operators in (1) or (2) which have scope over A PIANO,
and hence it is available to serve as an antecedent in discourse. But
on the intended distributive readings of these examples or of (3), the
indefinite object is under the scope of an adverbial operator which
does not have scope over any NPs outside of the sentence; thus A PIANO
may not serve as antecedent to a pronoun in a succeeding sentence, such as IT
in (5). This is the case with the indefinite in (4) as well, though
here the operator is the determiner of the subject rather than an
adverbial. In (4), this operator in the subject may not have scope
outside its immediate sentence, and this explains the infelicity of
anaphoric relations between its subject and that of (6). But the
subjects of (1) to (3) need not be quantificational themselves in
order to explain the quantificational force of the distributive
interpretation. If we assume that they are not quantificational and
are not under the scope of the distributivity operator, then we can
explain why, on the distributive reading of these examples, the
subjects are available to serve as antecedents for the subject of (7).
Adverbial distributivity need not apply only to VPs, but can apply to
derived predicates as well, as was noticed by Link (1986). So, for
example, in (8), it may be the case that three girls each received a
valentine from John:
(8) John gave a valentine to three girls.
The derived predicate here may be expressed by LAMBDAx(John gave a
valentine to x) or the related type in situation theory. If this is
modified by the distributivity operator and the result is predicated
of the group-denoting NP THREE GIRLS, we derive the intended
interpretation.
The view of distributivity sketched informally here contrasts with
earlier theories which in general either viewed the distributive-group
distinction as due to lexical properties of predicates or as arising
purely from properties of NPs (e.g., quantificational vs. referring).
In the work from which this brief summary is drawn, Roberts (1986), I
consider such theories in detail and show why none of them is
sufficiently general to account for the full range of distributive
phenomena.
This proposal also lays the groundwork for a simple theory of plural
anaphora, where plural as well as singular pronouns are treated as
simple bound variables. Thus, we have an account of examples such as
(9) (which might be uttered in the orthopedic ward of a hospital in
Colorado):
(9) These people broke their leg skiing.
The distributive reading of this example is strongly preferred, since
it is unlikely that the people broke a single, communal leg. And it
seems to mean that each person broke his or her own leg. Suppose that
adverbial distributivity applies here to a derived predicate LAMBDAx(x
broke x's leg), where the pronoun is treated as a variable bound by
the same operator as the subject role. When this modified predicate
applies to the group-denoting subject, the resulting interpretation
may be paraphrased, "each person in the group indicated has the
property of having broken his or her leg." Though the plural pronoun
here is bound by the subject, it is not coreferential with it, since
only the pronoun is under the scope of the distributivity operator.
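Schematically, in the same notation:

      (9')  D(LAMBDAx (x broke x's leg)) (these people)

Both occurrences of x, including the one translating THEIR, are bound
by the same LAMBDA, and D then distributes the derived predicate over
the atomic parts of the group.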
Finally, if this theory is used in conjunction with a theory of the
semantics of plurality along lines suggested by Link (1983), we may
develop a simple and empirically adequate account of the Dependent
Plural phenomena. However, the details of this proposal, as well as
the formal details of the treatment of distributivity, must be omitted
here for reasons of space.
References:
Bennett, Michael R. 1974. "Some Extensions of a Montague Fragment
of English". Ph.D. dissertation, UCLA.
Dowty, David R., and Belinda Brodie. 1984. The Semantics of
"Floated" Quantifiers in a Transformationless Grammar. In Mark
Cobler, Susannah MacKaye, and Michael T. Wescoat (eds.),
"Proceedings of WCCFL III". The Stanford Linguistics Association,
Stanford University, pp. 75-90.
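Heim, Irene. 1982. "The Semantics of Definite and Indefinite Noun
Phrases". Ph.D. dissertation, University of Massachusetts, Amherst.
Kamp, Hans. 1981. A Theory of Truth and Semantic Representation. In
Jeroen Groenendijk, Theo M. V. Janssen, and Martin Stokhof (eds.),
"Formal Methods in the Study of Language, Vol. I". Mathematisch
Centrum, Amsterdam.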
Lakoff, George. 1970. Linguistics and Natural Logic, "Synthese"
22:151-271. Reprinted in Donald Davidson and Gilbert Harman (eds.),
"Semantics of Natural Language". Dordrecht: Reidel, 1972.
Link, Godehard. 1983. The Logical Analysis of Plurals and Mass Terms:
A Lattice-theoretical approach. In Rainer Bauerle, Christoph
Schwarze, and Arnim von Stechow (eds.), "Meaning, Use, and
Interpretation of Language". Berlin: de Gruyter.
Link, Godehard. 1986. Generalized Quantifiers and Plurals. Manuscript,
University of Munich and CSLI, Stanford. To appear as CSLI Report
No. 66.
Roberts, Craige. 1986. "Modal Subordination, Anaphora, and
Distributivity". Ph.D. dissertation, University of Massachusetts,
Amherst.
Scha, Remko. 1981. Distributive, Collective and Cumulative
Quantification. In Jeroen Groenendijk, Theo M. V. Janssen, and Martin
Stokhof (eds.), "Formal Methods in the Study of Language, Vol. I".
Mathematisch Centrum, Amsterdam. Reprinted in Groenendijk, Janssen
and Stokhof (eds.), "Truth, Interpretation and Information".
Dordrecht: Foris, 1984.
-------
∂25-Nov-86 2307 EMMA@CSLI.STANFORD.EDU CSLI Monthly, 2:2 part 6
Received: from CSLI.STANFORD.EDU by SAIL.STANFORD.EDU with TCP; 25 Nov 86 23:07:34 PST
Date: Tue 25 Nov 86 16:16:31-PST
From: Emma Pease <Emma@CSLI.STANFORD.EDU>
Subject: CSLI Monthly, 2:2 part 6
To: friends@CSLI.STANFORD.EDU
Tel: (415) 723-3561
------------------
REPRESENTATION: A PERSONAL VIEW
Adrian Cussins, CSLI Postdoctoral Fellow
Any proper theory of representation must draw a distinction between
cognitive and communicative representations. For without this
distinction we will not understand the different goals that a theory
of representation may have.
Anything at all can function as a communicative representation: morse
code, footprints in the sand, stick-figure drawings, computer icons,
smoke, noises and marks that we and other animals, and inanimate
things, make. All that is required is that one or more intentional
agents interpret the object or event, or that there be a convention of
interpretation within some community of intentional agents to
interpret the object or event. By contrast, only a very restricted
category of things can function as cognitive representations. My
current perceptual experience of a Xerox Dandelion is a cognitive
representation because its functioning as a representation does not
depend on interpretation by some intentional agent, or on a convention
of interpretation of some community of agents. Although I can
interpret my own experience (for example, when some aspect of it is
ambiguous), I do not have to do so for my experience to represent.
A communicative representation, by contrast, must be interpreted, or
belong to a convention of interpretation, in order to represent.(1)
We normally specify what my experience is by means of conventional
devices, the functioning of which depends on other intentional agents.
But that is a quite separate point. A red traffic light represents
the command to stop only in virtue of a convention which governs
traffic lights. My perceptual experience, or thought, when confronted
with a traffic light, represents independently of any convention or
act of interpretation, even though we would normally specify what the
intentional state is of by means of the conventional linguistic phrase
"traffic light." It is, indeed, only because of the PRIMARY
representation of experience and thought (cognition) that the
DERIVATIVE representation of communicative signs is possible.
[Throughout I am treating language as Chomsky's E-language, a system
of linguistic communication (Chomsky 1986).] There can be a phrase of
the language, "traffic light" only because some members of the
community of language users are capable of thinking of traffic lights.
As we know from the phenomenon of the division of linguistic labor, as
well as other linguistic phenomena, it is not necessary that all
members of the linguistic community have concepts for all phrases of
the language, but it is necessary that for each phrase of the
language, some member of the community has the appropriate concepts.
A speaker may often exploit language to make a reference that he does
not himself understand, but as Evans writes (1982, p. 92), "Given the
divergence between the requirements for understanding and the
requirements for saying, it would be absurd to deny that our primary
interest ought to be in the more exigent conditions which are required
for understanding." This point holds quite generally for the use of
systems of communicative representation.
Communicative representation represents but it does so only in virtue
of the cognitive representation of one or more intentional agents or a
convention of interpretation which must itself be understood in terms
of the cognitive representations of the community which upholds the
convention. An understanding of how representation is possible must
ultimately rest on an understanding of how cognitive representation is
possible. A theory of cognitive representation is explanatorily
primary; a theory of communicative representation is explanatorily
derivative. This suggests that we ought to begin our theory of
representation with a theory of perception, memory, and thought (i.e.,
a theory of cognition) and only when we have such a theory will we be
able to provide a theory of language and other derivative
representation which exploits the cognitive representation of
interpreting agents (a theory of communication). Cognition is prior
to communication in the explanation of representation.
It is then a little alarming to discover that the vast majority of
work on representation depends on the reverse priority. When
specifying the content of cognitive representation one aims to capture
how things are from the agent's point of view, yet most theories will
simply specify cognitive content in terms of linguistic reference to
the objects or properties that the content is about. The content of
beliefs and the other attitudes is specified sententially by a "that
clause" and the content of perceptual experience is specified by
linguistic reference to objects and properties that the experience is
of, NOT ONLY IN OUR EVERYDAY COMMUNICATION, BUT AS A THEORETICAL
SPECIFICATION WHICH IS A PART OF A GENERAL THEORY OF COGNITIVE
CONTENT. Cognitive content is generally specified by theorists of
representation by means of linguistic reference to the world, as if a
theory of linguistic communication were explanatorily prior to a theory
of cognition.
Do not mistake my point. Cognitive contents may be specified
correctly by means of linguistic (or, in general, communicative)
reference to the world of the agent; we so specify them on most
occasions when we communicate about our own or others' mental states.
I call such specification of content "conceptual specification of
content." But the goal of the theoretician of representation is quite
different. His goal is not communication but understanding how it is
possible for physical systems to represent the world. For example, in
certain cases the theoretician should capture what is available in the
experience of the agent as a disposition (see Evans (1982), chapter
6). When the theoretician characterizes the disposition in language,
as of course he must, there need be no presupposition that the agent
understands that bit of language, or that the theoretician must
explain what it is for language to function, independently of a theory
of cognition. All that would be presupposed is a theory of what it is
for organisms to possess dispositions, a presupposition which is
entirely innocent from the point of view of cognitive theory. But if
the theoretician specifies the cognitive contents of (i.e., what is
available to) intentional agents directly by means of linguistic
reference (or reference in some other system of communicative
representation) to what the content is about, then he must suppose
that a theory of communicative representation is explanatorily prior
to a theory of cognitive representation. And so much the worse for
his enterprise. For, if we cannot explain how linguistic (or, in
general, communicative) reference is possible for physical systems in
terms of a prior theory of cognitive representation, then where else
are we to turn?
-- Theories of the causal relation between the use of some
communicative representation and bits of the world? But nobody has
any idea as to how such causal relations could be specified
noncircularly. (For an excellent criticism of causal theories of
intentionality which such a theorist would be committed to, see Evans
(1982) chapter 3 and part 2.)
-- Information-based theories of the information that utterances
carry about the world? But if the theory of linguistic reference is
explanatorily prior to the theory of cognitive reference, and the
theory of linguistic reference is information-based, then the theory
of cognitive reference will have to be information-based. And it has
been clear for a long time that the standard notion of
information-transmitted (rather than a notion of information which is
cognitively available) is an inadequate basis for a theory of
cognition. (Most recently, see Fodor (1986).) What, in effect, all
modern information-based theories of representation do is introduce a
new notion of information -- not the standard notion -- which is the
output from processes of "attunement" or "digitalization," and thus
which is a notion of representation rather than information (Dretske
1981, Barwise and Perry 1983). Since such processes are to be
understood by means of a cognitive (psychological) theory, the notion
of nonstandard information, which is a notion of a representational
state output from the processes of digitalization/attunement, is
itself a cognitive notion. Despite the misleading terminology, it
cannot be used to ground a theory of linguistic representation which
is explanatorily prior to a theory of cognitive representation.
And there is a further problem with (standard) information-based
theories of representation. Information exists in the world because
of relations of constraint between situations in the world (let's
suppose). These relations of constraint are supposed to hold, and to
be explained, quite independently of (a theory of) the
representational activities of intentional agents. If we are to
provide a theory of communicative representation, which is independent
of a theory of cognition, in terms of standard information, then we
must suppose that the explanation of what it is for a situation of a
given type to obtain in the world is independent of the theory of
representation, both communicative and cognitive. But the vast
implausibility of this position was the downfall of early,
Austin-style, theories of correspondence. Facts just are true
thoughts, or what is expressed by true sentences, etc. To suppose
that we have independent access to our thoughts, to the world, and to
some relation of correspondence (or noncorrespondence) between the two
is epistemologically incoherent. We think, and thereby have access to
the world. Our grasp of the true/false distinction is not a result of
our having independent access to the world and to our representations,
together with a discovery of the difference between the latter's
corresponding to the former and their failing so to correspond. Our
conception of the
world, and our conception of the true/false distinction are a joint,
and inseparable, product of our cognitive development. An explanation
of what it is to have a conception of the world just is an explanation
of what it is to grasp the distinction between true and false.
Nor can one appeal to the physical sciences in support of a tripartite
conception which involves three theories (each lower-numbered theory
being independent of each higher-numbered theory): (1) a physical
science theory of the world as-it-is-in-itself, including constraints
between physical situations, and thus information, (2) a theory of
communication, including linguistic communication, in terms of the
information carried by uses of communicative representations in
context, and (3) a theory of cognition. This tripartite conception
would support a unidirectional independence between the physical
sciences, the linguistic sciences and the psychological sciences. The
physical sciences could work in independence of the other two
categories of science. The linguistic sciences could take over a
notion of information from the physical sciences and use it to
characterize the functioning of communicative representational systems
in independence from the cognitive constructs of psychology.
Under this conception the scarcity of psychologists at CSLI would make
a lot of sense, for the functioning of communication could be studied
independently of the functioning of mind. A multidisciplinary
psychological center would require the presence of linguists but a
multidisciplinary center for the study of language (the center) would
not require the presence of psychologists. But, as I said, the
tripartite conception is unsupported. The physical sciences provide
theories of the nature of atoms and molecules, not of the nature of
tables, chairs, Xerox Dandelions, traffic lights, or people. Given
the restriction to the resources of the physical sciences, there is no
closed specification which picks out all and only the chairs in the
universe. The physical sciences cannot explain what it is for there
to be a chair, or a ... , in the universe, even if it can explain what
it is for there to be the elements out of which chairs are
constructed. But the kind of information that we need for a theory of
communication is information about things like chairs, not merely
information about the mereological constituents of chairs. So even if
we could support the tripartite conception for thought and talk about
mathematics and the physical sciences (which in any case I doubt), we
could not support it for the vast majority of our thought and talk.
The world just is what is presented to us in our perception and in our
thinking, so a theory of thought and a theory of the world must be
interdependent.
The interdependence of a theory of thinking and a theory of what we
think about means that a theory of thinking must not presuppose a
theory of what it is that we think about, for that would force a
theory of what it is that we think about to be independent of a theory
of cognition. A theory of thinking would presuppose a theory of what
it is that we think about if it employed what I called "conceptual
specifications" of the content of thinkings. For a conceptual
specification of the content of a thinking specifies the content in
terms of the objects and properties of the world that the thinking is
about. If our theory of cognition took conceptual specifications as
basic, then it would have to presuppose a theory of what it is for
there to be such objects and properties. There would be no room for
illumination of what it is for there to be such objects and properties
from a theory of cognition.(2)
There is yet a further problem. Not only must we not presuppose the
theory of objects and properties that our cognizings are about, but we
must also not presuppose the possession of concepts of those objects
and properties by the subjects of cognition. Our aim is to explain
what it is for organisms to possess concepts, not to describe general
features of cognition given that the cognition is assumed to be
already conceptual. As the traditional philosophical project of
providing definitions was constrained to provide noncircular
definitions, so the epistemic project of explaining what it is for an
organism to understand and think is constrained not to presuppose the
possession of concepts by the organism. Now, "concept" is a word
which is ill-regarded around here, so I shall be excused for spending
a few paragraphs in its defense.
It is one of the great sources of philosophical wonder that there
exists not just the world but perspectives on the world; that in the
world are things which think about the world. It's as if we feel we
understand in principle (if not in detail) how physical and biological
evolution could produce a world of objects that bear causal relations
to each other, but not how it could produce a world of objects which
reflect on those causal relations. Concepts are abilities of
(certain) organisms in virtue of which they can think about the world.
Hence the possession of concepts by organisms is a source of wonder,
and a challenge to the project of naturalism.
The behavior of most objects in the world can be understood by
adopting the "physical" or "design" stance (to use Dennett's (1978)
terminology) towards them, but without adopting the "intentional
stance." We can understand the behavior of a conventional chess
playing computer in terms of a procedural consequence specification of
the program, so long as there is no malfunction. We can also adopt
the intentional stance towards the computer, as when we think that the
computer "believes that it is good to get its queen out early," but
the point is that we don't have to do so in order to understand the
behavior of the machine (although we do have to do so in order to
understand why we might wish to build such machines). Although the
programmer may be guided in the design of the machine by his adoption
of the intentional stance, it is not necessary to adopt this stance in
order to understand what it is that he has designed. This is what it
means to treat the intentional stance "instrumentalistically." But
there are some creatures -- we humans, at least -- the (majority of the)
behavior of which must be understood by adopting the intentional
stance. The attribution of intentional states to adult humans is
realistic; that is, the purpose of the attribution is not just the
prediction of sequences of behavior in a given, constrained, context,
but the explanation of the causation of that behavior. We, unlike
overhead projectors, frogs, or conventional computers, act out of our
beliefs, thoughts, memories, perceptions, and imaginations. Our
behavior, unlike the frog's, is not as it is merely because the world
is a certain way, but also because we believe it, remember it, desire
it, and imagine it to be a certain way. Were it not for this, there
would be no genuine basis for the distinction between adopting the
moral stance towards things like us and not adopting it towards
overhead projectors. It is because the attribution of mental states
to flies is to be construed instrumentalistically, and the attribution
of mental states to humans (and others) is to be construed
realistically, that it is all right to swat flies but it is not all
right to swat humans (and others). The cognitive challenge to
naturalism is to show how human representation is so extraordinarily
and wonderfully different from frog representation, even though frogs
and humans are both products of the identical processes of natural
selection.
The point of all this is to make the absurdity of the realistic
ascription of concepts to screwdrivers (or GM robot welders) more than
usually apparent. We need an account of concept possession that makes
sense of the different attitudes we adopt towards things like
screwdrivers and other nonconcept-exercising things and things which,
like us, have concepts and, thus, a world about which we think. If it
made sense to ascribe concepts to screwdrivers (or robot welders) it
would have to make sense to ascribe just one or two concepts to a
thing. But then it would have to make sense to think about a world
which just contained screws and two properties, screwed and unscrewed.
But screws can only be part of a world in which there are factories
which make them, people who need them, properties of rigidity, etc.,
which are required for them to work, directions in which they are
screwed, locations where they are, ... Nor will it do to say that the
concept of a screw which a screwdriver or robot welder has is not our
concept of a screw; for the concept of a screw just is our concept of
a screw, an object which makes sense, and has its identity, in our
world. If screwdrivers could talk, we would not understand what they
said.
So, if the program of naturalism is to make room for conceptions of
the world, we will do well to explain how it is possible for merely
physical organisms to possess concepts. But if our scientific
psychology adopts conceptual specifications in its theory of the
representational abilities of organisms, then concept possession will
have been presupposed and no dent made in the challenge to naturalism.
We need scientific psychology to employ nonconceptual specifications
of the cognitive representational states of organisms which are such
that: (a) we understand what it is for physical systems to be in
states thus described, and (b) we understand why it is that being in
states so described is what it is to possess concepts (have a
conception of the world). As I argue in my thesis, this project is
possible because, and only because, the constitutive structure of
thought is its nonconceptual structure.
The nonconceptual theoretical specification of content is in terms of
psychological mechanisms -- mechanisms the possession of which does
not presuppose the ability to refer. There is no reason at all why as
theorists of content we must adopt the same specifications of content
as we use in everyday communication about people's attitudes and
experiences, or technical extensions of such specifications. And there
is good reason not to, since doing so would leave the central challenge
of naturalism a mystery. The general moral is that communicative
representation is
explanatorily derivative upon cognitive representation,
nonconceptually specified. A scientific psychology of cognition which
does not presuppose the possession of concepts is explanatorily prior
to any theory of communication, including linguistic theories of
natural language.
Wouldn't it be great if at CSLI we had something to say about a theory
of how organisms represent, which neither presupposes a theory of what
it is that we think about nor presupposes a theory of what it is for
organisms to possess concepts! If we don't, it will be a shame
because we won't have much of substance to say about communicative
representation either.(3)
Notes:
(1) There is little need for these purposes to draw any distinction
between communicative representation and what one might call
"functional" or "teleological" representation. Functional
representation is representation which is assigned to a piece of
mechanism by an interpreter in order to understand better how that bit
of mechanism functions in the context of the system of which it is a
part. For example, we might assign the representation of the speed of
sound to the neural mechanism of auditory localization. Teleological
representation is representation which is assigned to a system in
order to better understand why a system has been designed or why it
has evolved. Both of these types of representation are classified as
"communicative" here, even though their function is not for
communication.
(2) A dispositional theory of content, by contrast, would not
presuppose a cognitively independent theory of what it is that we
think about. It would merely presuppose the world.
(3) Thanks to Brian Smith, Craige Roberts, and Susan Stucky for their
comments.
References:
Barwise, J. and J. Perry. 1983. Situations and Attitudes. Cambridge:
MIT Press.
Chomsky, N. 1986. Knowledge of Language: Its Nature, Origin and Use.
New York: Praeger.
Dennett, D. 1978. Brainstorms: Philosophical Essays on Mind and
Psychology. Cambridge: MIT Press.
Dretske, F. 1981. Knowledge and the Flow of Information. Cambridge:
MIT Press.
Evans, G. 1982. The Varieties of Reference. Oxford: Oxford University
Press.
Fodor, J. A. 1986. Information and Association. Notre Dame Journal of
Formal Logic 27.
-------
∂25-Nov-86 2346 EMMA@CSLI.STANFORD.EDU CSLI Monthly, 2:2 part 7 and last
Received: from CSLI.STANFORD.EDU by SAIL.STANFORD.EDU with TCP; 25 Nov 86 23:46:02 PST
Date: Tue 25 Nov 86 16:17:22-PST
From: Emma Pease <Emma@CSLI.STANFORD.EDU>
Subject: CSLI Monthly, 2:2 part 7 and last
To: friends@CSLI.STANFORD.EDU
Tel: (415) 723-3561
------------------
SYMBOLIC SYSTEMS PROGRAM
Helen Nissenbaum
Stanford has a new undergraduate major, one with close ties to CSLI.
From its quiet start in September of this year, the Symbolic Systems
Program (SSP) has enjoyed steady growth. Fifteen students, most of
them juniors, have already enrolled as majors. Typically these
juniors consider the program "a godsend" that gives them the course of
study they wanted but had been unable to find before.
Among sophomores, who are just beginning to think about their majors,
there also seems to be a lot of serious interest.
SSP offers students the opportunity to explore the way people and
machines use symbols to cope with the world. Key notions are symbol,
representation, information, intelligence, action, and language. By
requiring course work in the departments of Computer Science,
Linguistics, Philosophy, and Psychology, the curriculum is designed to
show how these notions are approached from a variety of perspectives
including those of artificial intelligence, computer science,
cognitive psychology, linguistics, philosophy, and symbolic logic.
Each Symbolic Systems major completes a core of eleven required
courses. The four in computer science include theories of
computation, topics in AI, and the basics of machine and assembly
language, and provide considerable training in actual programming.
Two linguistics courses introduce students to theories of syntax,
semantics, and pragmatics. Students take a sequence of two logic
courses. Two philosophy courses cover many of the central topics in
traditional analytical philosophy with an emphasis on philosophy of
language and philosophy of mind. The psychology requirement is in
cognitive psychology.
In addition to the core, majors select an area of concentration in
which they complete an additional five courses. The idea of the
concentration is to encourage students to develop an area of expertise
that is consistent with their interests and long-term goals. Students
may select from the predesigned concentrations in artificial
intelligence, cognitive science, computation, logic, natural language,
philosophical foundations, semantics, and speech; or they may design
their own. Although most current majors have adopted the predesigned
concentrations (some with minor changes), there are some individually
designed concentrations, including one in computer music and one in
psychobiology.
The program has a large and diverse faculty committee, comprising
faculty from the affiliated departments (of Computer Science,
Linguistics, Philosophy, and Psychology) and consulting faculty from
industrial research centers in the Bay Area (SRI International,
Schlumberger, and Xerox PARC). The faculty participate in a variety
of ways: advising students, teaching courses, and making decisions
about the curriculum to steer the intellectual course of the program.
In the winter quarter of 1987, the Symbolic Systems Program will offer
its first course, called "Introduction to Information and
Intelligence." This is a survey of the program's subject area, given
as a series of exploratory self-contained lectures. Lectures will be
given by members of the program's committee. The course will be given
at a campus location as well as broadcast on the air by Stanford's
Instructional TV Network. Several additional courses are being
developed for the major including undergraduate offerings in
philosophy of language, computational linguistics, the semantics of
programming languages, and ethical issues in the uses of computers.
The Symbolic Systems Program has several ties to CSLI. Most
important, of course, is a curriculum which reflects CSLI's
intellectual direction. Consequently, the program's faculty committee
is made up almost entirely of CSLI affiliates, both regular Stanford
faculty and consulting faculty from industry. SSP is directed by Jon
Barwise, the first director of CSLI, and is coordinated by Helen
Nissenbaum, one of CSLI's first postdoctoral fellows. In addition,
CSLI provided important support while the program was being
established. In particular, Tom Wasow, one of the current directors of
CSLI, led the drive to get the program approved by the Stanford
administration and faculty.
The hope is that this program will inspire similar programs at
other universities around the world, programs that will contribute to
the training of researchers in language and information. Any readers
who would like more information about the program should call the
program office at (415) 723-4091, or write: Symbolic Systems Program,
62H Building 60, Stanford University, Stanford, CA 94305.
------------------
NEW CSLI PUBLICATIONS
61. D-PATR: A Development Environment for Unification-based Grammars
Lauri Karttunen
62. A Sheaf-Theoretic Model of Concurrency
Luis F. Monteiro and Fernando C. N. Pereira
63. Discourse, Anaphora and Parsing
Mark Johnson and Ewan Klein
64. Tarski on Truth and Logical Consequence
John Etchemendy
CSLI Reports and a complete list of publications can be obtained by
writing to Trudy Vizmanos, CSLI, Ventura Hall, Stanford, CA 94305, or
Trudy@CSLI.STANFORD.EDU.
------------------
NOTICED IN HARVARD MAGAZINE
November-December 1986
In the Books and Authors section, listed under `Political Science':
Noam Chomsky, Gj '51-'55, Barriers, M.I.T., $17.50 (paper, $7.95).
Exploration of complex questions concerning theories of government and
including the possibility of a unified approach.
----------------------------------------------------------------------
Editor's note
Selected commentary about Monthly articles or other matters will be
published in future issues. Please send correspondence to the Editor
of the Monthly at CSLI or by electronic mail to
Monthly-Editor@csli.stanford.edu.
----------------------------------------------------------------------
- Elizabeth Macken
Editor
-------
∂03-Dec-86 1753 EMMA@CSLI.STANFORD.EDU CSLI Calendar, December 4, No. 9
Received: from CSLI.STANFORD.EDU by SAIL.STANFORD.EDU with TCP; 3 Dec 86 17:53:11 PST
Date: Wed 3 Dec 86 16:41:57-PST
From: Emma Pease <Emma@CSLI.STANFORD.EDU>
Subject: CSLI Calendar, December 4, No. 9
To: friends@CSLI.STANFORD.EDU
Tel: (415) 723-3561
C S L I C A L E N D A R O F P U B L I C E V E N T S
_____________________________________________________________________________
December 4, 1986 Stanford Vol. 2, No. 9
_____________________________________________________________________________
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
____________
CSLI ACTIVITIES FOR THIS THURSDAY, December 4, 1986
12 noon TINLunch
Ventura Hall Reading: What to do with theta-Roles?
Conference Room by B. Levin and M. Rappaport
Discussion led by Annie Zaenen
(Zaenen.pa@xerox.com)
                     Abstract in this Calendar
2:15 p.m. CSLI Seminar
Redwood Hall Rational Behavior in Resource-bounded Agents
Room G-19 David Israel
(Israel@csli.stanford.edu)
                     Abstract in this Calendar
3:30 p.m. Tea
Ventura Hall
4:15 p.m. CSLI Talk
Redwood Hall Rational Speech Activity: The Case of Discourse
Room G-19 Politeness
Professor Asa Kasher
University of Tel Aviv, Dept. of Philosophy
                     Abstract in this Calendar
____________
CSLI ACTIVITIES FOR NEXT THURSDAY, December 11, 1986
12 noon TINLunch
Ventura Hall Reading: Differences in Rule Type and their
Conference Room Structural Basis
by Stephen R. Anderson
Discussion led by Donald Churma
(Churma@csli.stanford.edu)
Abstract in the next Calendar
2:15 p.m. CSLI Seminar
Redwood Hall Rational Agency
Room G-19 Phil Cohen
(pcohen@sri-warbucks.arpa)
Abstract in the next Calendar
3:30 p.m. Tea
Ventura Hall
--------------
THIS WEEK'S TINLUNCH
Reading: What to do with theta-Roles?
Discussion led by Annie Zaenen
December 4
When Extended Standard Theory won the linguistic wars (Newmeyer's
version of linguistic history), lexical semantics went out of fashion
in mainstream generative grammar. But, as is often the case with
victories won by power politics rather than by reason, the problems
raised in generative semantics research remained unsolved, and recent
years have seen them resurface. At this
point several attempts to specify the role of lexical semantics in
syntax are under elaboration. Among the debated issues are (1) the way
semantic information has to be represented in the lexicon; (2) the
number and the properties of the levels of representation needed to
link semantics and syntax.
The paper tries to give a partial answer to these questions from a
Government-Binding-related point of view. I chose it because that point of
view will most likely not be widely represented among the live
participants at the TINLunch. The main purpose of the lunch should be
a discussion of the general issues raised in the paper rather than a
critique of the paper itself.
Other relevant recent writings on the topic include: Dowty (1986):
On the semantic content of thematic roles; Jackendoff (1986): The
status of Thematic Relations in Linguistic Theory; Foley and Van Valin
(1984): Functional Syntax and Universal Grammar; and Kiparsky's
manuscript on Morphosyntax.
--------------
THIS WEEK'S SEMINAR
Rational Behavior in Resource-bounded Agents
David Israel
December 4
Members of the Rational Agency Project at CSLI (RatAg) have been
involved in research to develop an architecture for the production of
rational behavior in resource-bounded agents. The overall aim of this
work is to combine techniques that have been constructed in artificial
intelligence for automating means-end reasoning with a computational
instantiation of techniques that have been developed in decision
theory for weighing alternative courses of action. The focus is on
ensuring that the resulting synthesis is a viable architecture for
agents who, like humans and robots, are resource-bounded, i.e., unable
to perform arbitrarily large computations in constant time.
Predicating the architecture on the fact that agents have resource
bounds will enable its use both as a device for producing rational
behavior in robots that are situated in dynamic, real-world
environments, and as a model of human rational behavior. In taking
seriously the problem of resource boundedness, we draw heavily on the
view of plans as ``filters'' on practical reasoning. We are concerned
with determining what regularities there are in the relationship
between an agent and her environment that can be exploited in the
design of the filtering process.
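
To make the filtering idea concrete, here is a minimal sketch of one
such cycle (a toy illustration of our own devising; every name in it is
hypothetical, and none of it is drawn from the project's actual
design). New options are screened by a cheap compatibility check
against the plans the agent has already adopted; only the survivors,
or options marked important enough to override the filter, reach the
costly deliberation step.

    # Toy sketch of "plans as filters" on practical reasoning.
    # All names and structures here are illustrative assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class Option:
        name: str
        expected_value: float           # crude decision-theoretic rating
        resources: frozenset            # resources the option would consume
        overrides_filter: bool = False  # too important to be screened out

    @dataclass
    class Agent:
        plans: list = field(default_factory=list)

    def conflicts(a, b):
        # Two options conflict when they compete for a shared resource.
        return bool(a.resources & b.resources)

    def survives_filter(option, plans):
        # Cheap check: compatible with every adopted plan, or an override.
        return option.overrides_filter or all(
            not conflicts(option, p) for p in plans)

    def step(agent, options):
        # One cycle: filter first, deliberate only over the survivors.
        survivors = [o for o in options if survives_filter(o, agent.plans)]
        if not survivors:
            return None
        choice = max(survivors, key=lambda o: o.expected_value)
        agent.plans.append(choice)  # adopted plans now filter later options
        return choice

On this sketch, an agent already committed to a plan that consumes
Thursday afternoon never even deliberates about a new option competing
for the same afternoon; that economy is what resource-bounded agents
need and what unconstrained decision theory does not provide.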
--------------
THIS WEEK'S COLLOQUIUM
Rational Speech Activity: The Case of Discourse Politeness
Asa Kasher
December 4
The paper will briefly outline the role to be played by rationality
considerations in governing understanding and production of speech
acts. It will be argued that a certain aspect of rationality
considerations, namely cost, has been neglected. Its importance will
be demonstrated in the case of discourse politeness as well as in some
apparent counter-examples to Grice's conversational maxims.
-------
∂10-Dec-86 1826 EMMA@CSLI.STANFORD.EDU CSLI Calendar, December 11, No. 10
Received: from CSLI.STANFORD.EDU by SAIL.STANFORD.EDU with TCP; 10 Dec 86 18:26:03 PST
Date: Wed 10 Dec 86 16:58:14-PST
From: Emma Pease <Emma@CSLI.STANFORD.EDU>
Subject: CSLI Calendar, December 11, No. 10
To: friends@CSLI.STANFORD.EDU
Tel: (415) 723-3561
C S L I C A L E N D A R O F P U B L I C E V E N T S
_____________________________________________________________________________
December 11, 1986 Stanford Vol. 2, No. 10
_____________________________________________________________________________
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
____________
CSLI ACTIVITIES FOR THIS THURSDAY, December 11, 1986
12 noon TINLunch
Ventura Hall Reading: Differences in Rule Type and their
Conference Room Structural Basis
by Stephen R. Anderson
Discussion led by Donald Churma
(Churma@csli.stanford.edu)
Abstract in this Calendar
2:15 p.m. CSLI Seminar
Redwood Hall Persistence, Intention, and Commitment
Room G-19 Phil Cohen
(pcohen@sri-warbucks.arpa)
Abstract in this Calendar
3:30 p.m. Tea
Ventura Hall
____________
CSLI ACTIVITIES FOR THURSDAY, JANUARY 8, 1987
12 noon TINLunch
Ventura Hall Resurrection of Metaphors -- A Tool for
Conference Room Transdisciplinary Migration
Discussion led by Egon Loebner
(Loebner%hp-thor@hplabs.hp.com)
Abstract in this Calendar
2:15 p.m. CSLI Seminar
Redwood Hall No Seminar
Room G-19
3:30 p.m. Tea
Ventura Hall
--------------
ANNOUNCEMENT
There will be no TINLunch, Seminar, or Calendar on December 18,
December 25, or January 1 because of the University break. TINLunch and the
Calendar will resume on January 8 and the Seminar on January 15.
--------------
THIS WEEK'S TINLUNCH
Reading: Differences in Rule Type and their Structural Basis
Discussion led by Donald Churma
December 11
Anderson is arguing, in somewhat programmatic fashion, for what is in
effect a fairly highly modularized view of phonology (although he
doesn't use this term). Essentially, he views phonology as having
three modules, one in which the metrical formalism is appropriate (in
which apparently only stress and syllabification belong), one where
things are done autosegmentally (tone, nasality, etc.), and one that
contains only `Garden-Variety' phonological rules (dissimilation,
funky morphophonemic rules, (controversially) vowel harmony). The
argument is basically the standard Chomsky/Pullum/Zwicky one that
modularization allows for a more constrained theory. Curiously, this
paper has had little or no effect on subsequent phonological practice.
Why?
--------------
THIS WEEK'S SEMINAR
Persistence, Intention, and Commitment
Phil Cohen
December 11
This talk, presenting joint work with Hector Levesque (University of
Toronto), establishes basic principles governing the rational balance
among an agent's beliefs, actions, and intentions. Such principles
provide specifications for artificial agents, and approximate a theory
of human action (as philosophers use the term). By making explicit
the conditions under which an agent can drop his goals, i.e., by
specifying how the agent is `committed' to his goals, the
formalism captures a number of important properties of intention.
Specifically, the formalism provides analyses for Bratman's three
characteristic functional roles played by intentions, and shows how
agents can avoid intending all the foreseen side-effects of what they
actually intend. Finally, the analysis shows how intentions can be
adopted relative to a background of relevant beliefs and other
intentions or goals. By relativizing one agent's intentions in terms
of beliefs about another agent's intentions (or beliefs), we derive a
preliminary account of interpersonal commitments.
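
To convey the flavor of making commitment explicit, here is one such
clause rendered as a "persistent goal" (our reconstruction, in the
spirit of Cohen and Levesque's circulated definitions, not necessarily
the exact formula used in the talk): the agent believes p is not yet
true, has the goal that p eventually be true, and keeps that goal
until it believes p has been achieved or has become impossible:

    \[
    (\mathit{P\mbox{-}GOAL}\; x\; p) \;\equiv\;
      (\mathit{BEL}\; x\; \neg p) \;\wedge\; (\mathit{GOAL}\; x\; \Diamond p)
      \;\wedge\; \bigl[\, (\mathit{GOAL}\; x\; \Diamond p) \;\mathbf{U}\;
      ((\mathit{BEL}\; x\; p) \,\vee\, (\mathit{BEL}\; x\; \Box\neg p)) \,\bigr]
    \]

On such a definition an agent may drop a goal only upon a change of
belief, which is one way a formalism can capture the stability that,
for Bratman, distinguishes intentions from mere desires.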
--------------
MORPHOLOGY/SYNTAX/DISCOURSE INTERACTIONS GROUP
Diachronic Processes in the Evolution of Reflexives
Suzanne Kemmer
Kemmer@csli.stanford.edu
12:30, Monday, December 15, Ventura Conference Room
The historical development of reflexive morphemes into middle voice
markers (roughly, markers of subject-affectedness) is well-attested in
a wide range of languages. This talk concentrates on what I call
`two-form systems', i.e., languages which apparently have two
reflexive markers, a full and a reduced form (e.g., Icelandic,
Russian, Djola). I discuss some ways in which cross-linguistic
generalizations about these languages bear on issues of
representation.
Despite the similarity of these systems from a synchronic
perspective, it turns out that they can develop via two distinct
diachronic processes. In one, an original reflexive splits into two
formally and functionally distinct forms; in the other the reflexive
function is renewed by a new marker while the old reflexive becomes a
middle marker. The typological and diachronic evidence, taken
together, presents a coherent picture of the relation between reflexive
and non-reflexive middle semantics.
--------------
NEXT TINLUNCH
Resurrection of Metaphors
A Tool for Transdisciplinary Migration
Egon E. Loebner
System Performance Center
Hewlett-Packard Laboratories
January 8, 1987
It is proposed that some techniques which can accelerate entry into a
second scientific professional practice are analogous to the
well-established deductive techniques by which many adults approach the
acquisition of a second language in a deliberate fashion. A
successful migration from one language community to another relies on
the transference of linguistic, cognitive and societal skills of
individuals from one system to a different system, which nevertheless
shares many linguistic and cultural universals with the former system.
The claim put forward here is that the very same skills are
transferred during transdisciplinary migration.
Language acquisition data, collected on four continents, strongly
suggest that "being bilingual can have tremendous advantages not only
in terms of language competencies but also in terms of cognitive and
social development" (W. E. Lambert, 1981, NYAS, Vol. 379, pp. 9-22).
I believe that becoming multidisciplinary can lead to similar
advantages in terms of professional and scientific competencies and
can induce an expanded metadisciplinary development of cognitive and
communicative skills.
The talk concentrates on the role that can be played by a
remarkable analogy, invented 131 years ago by the world's master
builder of theory construction, James Clerk Maxwell. He defined it as
"that partial similarity between the laws of one science and those of
another which makes each of them illustrate the other". I plan to
show how such partial similarities can be extracted using textual
analyses of now dead metaphors which, while alive, aided theory
construction by, in the words of T. S. Kuhn, "calling forth a network
of similarities which help to determine the way in which (scientific)
language attaches to the world". Buttressing my argument through
reference to recent findings of linguists, philosophers,
psychologists, and educators on the role of metaphor in theory
construction and reconstruction, I plan to argue that dead metaphors
in unrelated fields are relatable if their metaphoricity had a common
origin and that these interrelations constitute a transformational
grammar that can assist in interpreting concepts of one field in terms
of the other field.
Finally I wish to suggest that the transdisciplinary migration
technique can not only enhance new discipline acquisition but can also
provide the metascientific means to integrate and unify practices and
theories in different branches of science, even in those that appear
to be quite remote at this point in history.
-------